US20120014603A1 - Recognition method and system - Google Patents
- Publication number
- US20120014603A1 (U.S. application Ser. No. 13/243,261)
- Authority
- US
- United States
- Prior art keywords
- input
- character
- node
- strokes
- characters
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/32—Digital ink
- G06V30/36—Matching; Classification
- G06V30/373—Matching; Classification using a special pattern or subpattern alphabet
Definitions
- the present invention relates generally to automated recognition of user input, such as handwriting recognition.
- in human-machine (e.g., human-computer) interfaces, keyboards and mice are inconvenient input devices in many applications.
- mobile computing devices often are too small to provide a computer keyboard.
- some languages are not well suited to computer keyboards.
- Off-line handwriting recognition systems are known, sometimes referred to as optical character recognition.
- Off-line handwriting systems are limited in their use, however, since they cannot be used for real time applications, such as instant text messaging. Accordingly, off-line techniques fail to address needs such as mobile computing, instant messaging, and real-time computer input.
- On-line handwriting recognition systems are known which can perform real-time or near real-time character recognition, but these systems are typically quite complex. For example, some known systems use hidden Markov models to represent each character within an alphabet to be recognized. Such systems typically require large amounts of memory and require significant computer processing power to perform character recognition. Hence, these systems can be slow in performing recognition or can have processing and/or memory requirements incompatible with power-limited portable computing applications.
- the invention is directed to a method of recognizing discrete multi-component symbolic input from a user.
- the method can include providing a database of model input sequences, each model input sequence corresponding to a symbol to be recognized. Additional steps of the method can include acquiring an input function from a user, the input function including a time function, and segmenting the input function into a sequence of input components. Another step of the method can include determining at least one hypothesis symbol sequence, wherein the hypothesis symbol sequence is updated for each acquired input component. The update can be based on a comparison of the input component to the database and based on a hypothesis symbol sequence history from at least two components previously in time.
- a method of recognizing handwriting input wherein symbolic characters to be recognized are defined in terms of a sequence of reference stroke components.
- a method of creating a handwriting recognition database includes acquiring spatiotemporal input from a user, separating the spatiotemporal input into discrete input strokes, and storing normalized representations of the discrete input strokes into a database.
- a method of recognizing handwriting includes providing a trellis definition corresponding to a plurality of multi-stroke characters to be recognized, acquiring spatiotemporal input from a user, and defining and updating a plurality of node scores for each input stroke, wherein node scores are advanced non-uniformly in time through the trellis.
- a handwriting recognition system which includes a capture system and a processing system.
- the capture system accepts spatiotemporal input from a user.
- the processing system separates the spatiotemporal input into discrete input strokes, compares the input strokes to a database of model character stroke sequences, and determines a plurality of candidate character sequences.
- a computer readable medium having computer readable program code embodied thereon for recognizing handwritten characters is provided.
- FIG. 1 is a block diagram of a system for handwriting recognition in accordance with an embodiment of the present invention;
- FIG. 2 is an illustration of a portable device which can be used for handwriting input in accordance with an embodiment of the present invention
- FIG. 3 is a flow chart of a method for recognizing handwritten characters in accordance with an embodiment of the present invention
- FIG. 4 is an illustration of a database of three characters in accordance with an embodiment of the present invention.
- FIG. 5 is an illustration of a trellis definition for the database of FIG. 4 ;
- FIG. 6 is an illustration of the updating of trellis nodes for the trellis of FIG. 5 ;
- FIG. 7 is a flow chart of a method of creating a handwriting recognition database in accordance with an embodiment of the present invention.
- FIG. 8 is an illustration of two different ways of forming a letter
- FIG. 9 is a flow chart of a method of performing handwriting recognition in accordance with an embodiment of the present invention.
- FIG. 10 is an illustration of a mathematical symbol showing special areas of influence in accordance with an embodiment of the present invention.
- FIG. 11 is an illustration of a complex fraction showing recursive parsing in accordance with an embodiment of the present invention.
- stroke refers to a spatiotemporal input from a user, for example, defined as the position of a pen as a function of time between a pen down event and a pen up event.
- a stroke may be a dot, line or curve defined by a series of time-tagged coordinates or defined by a mathematical function.
- character refers to a glyph, which may consist of multiple strokes.
- a character may be an English letter, numeral, Chinese character, mathematical symbol, or the like.
- character length refers to the number of strokes that constitute a character. Character length (and the particular strokes making up a character) may be different for different users.
- stroke model refers to a representation of a stroke within a character.
- a stroke model may be a normalized spatiotemporal description of the stroke.
- character model refers to a series of stroke models corresponding to a particular character.
- character model database refers to the collection of character models for a set of characters to be recognized.
- node refers to a node within a trellis representation of a set of characters to be recognized.
- node score refers to a score for a particular node.
- a node score may be a relative likelihood metric.
- candidate character sequence refers to a history of likely characters.
- a candidate character sequence may be associated with each node.
- node hypothesis character refers to a unique character defined for each head node.
- the node can correspond to the hypothesis that the most recently input character is the head node hypothesis character.
- trellis refers to a representation of the possible sequences of characters on a stroke by stroke basis.
- a trellis can be drawn in a graph form, where nodes are connected by lines to indicate possible character sequences.
- the term “about” means quantities, dimensions, sizes, formulations, parameters, shapes and other characteristics need not be exact, but may be approximated and/or larger or smaller, as desired, reflecting acceptable tolerances, conversion factors, rounding off, measurement error and the like and other factors known to those of skill in the art.
- Numerical data may be expressed or presented herein in a range format. It is to be understood that such a range format is used merely for convenience and brevity and thus should be interpreted flexibly to include not only the numerical values explicitly recited as the limits of the range, but also to include all the individual numerical values or sub-ranges encompassed within that range as if each numerical value and sub-range is explicitly recited.
- a numerical range of “about 1 to 5” should be interpreted to include not only the explicitly recited values of about 1 to 5, but also include individual values and sub-ranges within the indicated range.
- included in this numerical range are individual values such as 2, 3, and 4 and sub-ranges such as 1-3, 2-4, and 3-5, etc. This same principle applies to ranges reciting only one numerical value and should apply regardless of the breadth of the range or the characteristics being described.
- Illustrated in FIG. 1 is a system for handwriting recognition in accordance with an embodiment of the present invention.
- the system shown generally at 10 , includes a capture subsystem 12 and a processing subsystem 14 .
- the capture subsystem is configured to accept spatiotemporal input from a user and outputs a digitized representation of the spatiotemporal input.
- the processing subsystem is coupled to the capture system and determines recognized characters from the digitized representation of the spatiotemporal input.
- the processing subsystem can include software to implement a character recognition technique as described further below.
- the processing system can output recognized characters, for example, to a software application or to a display.
- a portable device 20 may include a touch screen 22 for handwriting input.
- the portable device may be used to provide instant messaging (e.g., via wireless communications networks).
- the capture subsystem 12 can include the touch screen, which digitizes and outputs a series of time tagged coordinates corresponding to the position of a stylus 24 versus time as the stylus traces a path 26 on the touch pad surface. Recognized characters may be output to a display (e.g. the touch screen) or used for internal applications (e.g. instant messaging).
- FIG. 3 illustrates, in flow chart form, a method 30 for recognizing handwritten characters in accordance with an embodiment of the present invention.
- the method includes providing 32 a database of characters to be recognized. Characters can be defined as a sequence of written reference strokes. The length of each sequence can be different. For example, English letters are typically formed using about one to three strokes, although individual writers may use more or fewer strokes. In contrast, Chinese characters are typically more complex, and can have as many as about 23 strokes. Although the amount of memory used for the database will therefore vary depending on the alphabet to be recognized, efficient use of memory can be obtained by using a variable amount of memory for each character where only the actual strokes required for each character are stored. Additional detail on the database is provided below.
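The variable-length database described above can be sketched as a simple mapping from each character to its sequence of normalized reference strokes. The stroke data and helper function below are illustrative placeholders, not taken from the patent:

```python
# Minimal sketch of a character-model database. Each stroke is a list of
# (x, y, t) samples normalized to the unit range; the diagonal line used
# here is placeholder data standing in for real reference strokes.
line = [(i / 9.0, i / 9.0, i / 9.0) for i in range(10)]

database = {
    "A": [line, line, line],  # 3-stroke character: character length 3
    "B": [line, line],        # 2-stroke character: character length 2
    "C": [line],              # 1-stroke character: character length 1
}

def character_length(char):
    """Number of strokes that constitute a character's model."""
    return len(database[char])
```

Because each entry stores only the strokes its character actually requires, memory use varies with character complexity, as the passage above notes.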
- Character recognition begins with the step of acquiring 34 written spatiotemporal input from a user.
- the spatiotemporal input can be in the form of discrete input strokes.
- spatiotemporal input can be provided by a touch pad.
- the touch pad may separate the spatiotemporal data into discrete input strokes.
- the method can include determining a two-dimensional position of the stylus tip as a function of time.
- the two-dimensional position can be provided as a series of discrete samples of stylus coordinates as a function of time.
- Discrete input strokes may be delimited by stylus-down and stylus-up events, taking the spatiotemporal input between stylus-down and stylus-up as a single stroke.
- the method can include separating the spatiotemporal input into discrete input strokes.
- Using discrete strokes as input provides greater flexibility in how handwriting is performed on the input device. For example, characters can be formed on top of each other, since the individual strokes can still be identified and separated. Accordingly, the spatial sequencing of input writing can be relaxed. This can also provide advantages when using mixed language applications where input can vary between being left-to-right and right-to-left.
- a handwritten “4” and “9” may look quite similar, but be formed using different strokes by the user.
- the “4” may be formed using two strokes, and the “9” formed using a single stroke. Maintaining the time function information can help to distinguish between these characters.
- a stroke can be defined as the spatiotemporal input over a time period delimited by some other user action, such as touching a particular portion of the touch screen.
- a stroke can be defined as the stylus position observed within a particular area defined on the touch pad, such as a character bounding box.
- Various other ways to define a stroke will occur to one of ordinary skill in the art having possession of this disclosure which can be used in embodiments of the present invention.
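One way to realize the stroke separation described above is to split the time-tagged samples at pen-up events. This is a hedged sketch; the sample format (a pen-down flag per sample) is an assumption, not something the document specifies:

```python
def segment_strokes(samples):
    """Split time-tagged samples into discrete strokes.

    `samples` is a list of (x, y, t, pen_down) tuples; a stroke is the run
    of samples between a pen-down event and the next pen-up event.
    """
    strokes, current = [], []
    for x, y, t, pen_down in samples:
        if pen_down:
            current.append((x, y, t))
        elif current:            # pen-up ends the current stroke
            strokes.append(current)
            current = []
    if current:                  # input ended while the pen was still down
        strokes.append(current)
    return strokes
```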
- a next step is comparing 36 the discrete input strokes to the database of characters to obtain a comparison result.
- a plurality of character scores can be determined by comparing the discrete input strokes to each character defined in the database.
- Character scores can be obtained from a distance measure between the input strokes and the sequence of reference strokes.
- the input stroke and reference strokes can be viewed as curves in space, defined as a function of time.
- the input stroke and reference stroke can be normalized in one or more spatial dimensions, and also normalized in time.
- Various distance measures are known in the art for determining a distance between two curves, such as the Frechet, Hausdorff, and Euclidean distances.
- the distance measure can also include weighting or averaging. For example, differences between the input and reference stroke at the beginning (and end) of the stroke may be weighted more or less heavily than differences at the middle of the stroke.
- Euclidean distance with elasticity is a specific incarnation of elastic matching.
- Elastic matching is a dynamic programming method which uses an abstract distance measure, such as Euclidean or another definition of distance. Elastic matching can be desirable because it can, to a certain extent, compensate for minor perturbations in letter shape.
- Various different predetermined weighting functions may prove useful in embodiments of the present invention, depending on, for example, user preferences and language characteristics. Average Euclidean distance provides an advantage in that it is linear in the number of points on the curve, whereas most other methods are quadratic.
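As a concrete illustration of the comparison step, the average Euclidean distance mentioned above can be computed after resampling both curves to a common number of points. The nearest-index resampling here is a simplification standing in for the time normalization described elsewhere in the document:

```python
import math

def resample(stroke, n=32):
    """Crudely resample a stroke (a list of (x, y) points) to n points so
    two strokes can be compared point by point."""
    m = len(stroke)
    return [stroke[min(int(i * m / n), m - 1)] for i in range(n)]

def average_euclidean_distance(a, b, n=32):
    """Average point-wise Euclidean distance between two resampled strokes;
    linear in the number of points, as noted above."""
    ra, rb = resample(a, n), resample(b, n)
    return sum(math.dist(p, q) for p, q in zip(ra, rb)) / n
```

A lower score means a closer match; weighting the endpoints differently, as discussed above, would amount to multiplying each point-wise term by a predetermined weight before averaging.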
- Another step of the method 30 includes selecting and updating 38 candidate character sequences based on the comparison results as the input strokes are acquired. For example, a plurality of candidate character sequences can be maintained, one candidate character sequence corresponding to each possible character to be recognized. As input strokes are received, candidate character sequences are updated to append a hypothesized character.
- the candidate character sequences scores can be updated based on the character scores and a history of the candidate character sequence score. For example, the character score for the hypothesized character can be added to a candidate character sequence score from previous in time. The position previous in time is based on the number of strokes of the hypothesized character. Since some characters have more than one stroke, at least one of the candidate character sequences is therefore updated based on a candidate character sequence history derived from at least two sequential strokes previously acquired.
- FIG. 4 illustrates a database of three characters to be recognized, consisting of “A” “B” and “C”.
- the stroke sequence for each character is shown in the table, from which it can be seen that “A” is formed from three strokes, “B” is formed from two strokes, and “C” is formed from a single stroke.
- the character length of A is 3, B is 2, and C is 1.
- FIG. 5 illustrates a trellis definition 50 for this example.
- Three nodes are defined (distributed vertically) corresponding to the three characters to be recognized.
- the trellis definition extends horizontally, each column corresponding to advancing by one stroke. The horizontal axis thus corresponds roughly to time, although not every stroke requires the same amount of time.
- the trellis definition shows all possible node sequences, where each node corresponds to a hypothesized character as a candidate character sequence is built up. It can be seen that some transition paths skip columns, corresponding to multi-stroke characters.
- Each node corresponds to a hypothesis that the most recently acquired strokes correspond to the node's corresponding character.
- the character corresponding to each node will be referred to as the node hypothesis character.
- the node score can be updated based on how well the last few acquired strokes correspond to the reference strokes of the node hypothesis character.
- the node score can also take into account the node score for the node that was considered most likely before the current character.
- node scores can be updated as follows. Character scores are obtained by comparing the input strokes to the database of reference strokes. For example, a character score for “A” is obtained by comparing the last three input strokes to the three reference strokes in the database for character “A”. Node scores are then updated based on the character scores and node scores from previously in time. For example, for node “A” the node score at time n is based on the most likely node score from time n ⁇ 3 combined with the character score for “A”. For node “B” the node score at time n is based on the most likely node score from time n ⁇ 2 combined with the character score for “B”.
- For node "C" the node score at time n is based on the most likely node score from time n−1 combined with the character score for "C". For example, the node score from previous in time can be added to the character score to obtain the new node score. Because some of the transitions skip nodes, the node scores are advanced non-uniformly in time through the trellis. Node scores are updated based on node scores from previous in time by a number of strokes corresponding to the node hypothesis character.
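The node-score update for the three-character example can be sketched as a dynamic-programming step in which each node reaches back a number of strokes equal to its hypothesis character's length. The data-structure names below are illustrative, not from the patent:

```python
def update_trellis(scores_history, char_scores, lengths):
    """One trellis update after a new input stroke.

    scores_history: list of {char: node_score} dicts, one per past stroke
                    (index -k is the state k strokes ago).
    char_scores:    {char: score} comparing the most recent strokes to the
                    reference strokes for char.
    lengths:        {char: number of strokes in char's model}.

    Each node reaches back `lengths[char]` strokes, so nodes advance
    non-uniformly in time through the trellis.
    """
    new_scores = {}
    for char, score in char_scores.items():
        k = lengths[char]
        past = scores_history[-k] if len(scores_history) >= k else {}
        best_past = max(past.values()) if past else 0.0
        new_scores[char] = best_past + score   # add-compare-store step
    scores_history.append(new_scores)
    return new_scores
```

For instance, with the character scores from the example above (0.6 for "B" over two strokes, 0.2 for "C" over one), node B's update reaches back two strokes while node C's reaches back only one.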
- Path history through the trellis can also be maintained in the form of candidate character sequences.
- the candidate character sequences can be updated by appending the node hypothesis character to the candidate character sequence which corresponds to the most likely node from previous in time used in updating the node score.
- FIG. 6 illustrates the updating 60 of trellis nodes for a hypothetical series of input strokes for the above three character example.
- Time is shown in the vertical direction, with the first column in FIG. 6 showing the stroke number.
- the candidate character sequences are initialized to be blank, and the scores are zeroed.
- the character scores are assumed to range between 0 and 1, where 0 is a very poor match, and 1 is a very good match. Other scaling systems can be used in embodiments of the present invention as will occur to one of ordinary skill in the art in possession of this disclosure.
- the character score for C is 0.2.
- the character scores shown in this example have been artificially created, and may not correspond to actual character scores which would be obtained during actual operation using the exemplary input.
- the candidate character sequence for node C has a “C” appended.
- character scores for characters B and C can be obtained.
- the character score for B is obtained by comparing the last two input strokes to the reference strokes for B, and for this example, is equal to 0.6.
- the character score for C is obtained by comparing the last input stroke to the reference stroke for C, which for this example is 0.2. Note that these character scores indicate that the last input stroke is a relatively poor match to a “C” and that the last two input strokes are a reasonably good match to a “B”.
- node scores for nodes B and C are updated based on these character scores, and the candidate character sequences are updated by appending the node hypothesis character.
- for node B at this point in time, the only possibility is that the two input strokes correspond to a single “B”; hence node B's score is set to 0.6 (the character score for B) and node B's candidate character sequence has a “B” appended.
- the node score for node A is quite a bit larger than the scores for nodes B and C. This is as expected, since the actual input in this example is an “A”.
- the next input stroke is the single stroke character C.
- the next stroke input is the beginning of a letter B.
- the character scores are updated, and node scores updated as above. It is interesting to note that after updating the node scores and candidate character sequences, none of the nodes has the correct character sequence. This is not a problem, however, as future updates may reach back earlier in time to node scores and candidate character sequences which are correct. This is seen in the next update.
- the candidate character sequence for node B thus is set equal to the correct sequence.
- candidate sequences and node scores are updated based on candidate sequences and node scores from previously in time.
- some possible sequences are effectively discarded, helping to keep the number of candidate sequences under consideration manageable.
- the number of different nodes in the trellis (and thus the number of candidate character sequences) under consideration can be set equal to the number of possible characters to be recognized. This is in contrast to a conventional brute force search, for which the number of possible sequences under consideration would grow exponentially with time.
- the required computational processing is relatively simple, consisting mostly of add-compare-store type operations. Accordingly, a relatively simple processor can be used to implement the method. Alternately, the method can also be implemented in hardware, such as in a field programmable gate array or application specific integrated circuit.
- nodes are not advanced uniformly in time through the trellis. As illustrated above, nodes may reach back many strokes in time when updating the node score and candidate character sequence.
- the discussion of FIGS. 5 and 6 has not made reference to any particular arrangement of the node scores and candidate character sequences in memory. Many different arrangements of these parameters within memory can be used as will occur to one of ordinary skill in the art having possession of this disclosure.
- the method can also include the step of outputting a recognized character.
- recognized character can be output after a predetermined number of strokes, such as the maximum character length. It will be appreciated from the example shown above in FIGS. 4-5 , that after a delay equal to the maximum character length, the candidate character sequences will typically have converged and agree on what characters were input. Accordingly, recognized characters can be taken from one of the candidate character sequences and output after a delay. The delay can be equal to the maximum character length, or some other value. As another example, recognized characters can be output after all of the candidate character sequences agree on the recognized character for a given point in time. Various other approaches for deciding when to output characters will occur to one of ordinary skill in the art in possession of this disclosure.
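The "all candidate sequences agree" output rule described above can be sketched as follows; the function name and sequence representation are hypothetical:

```python
def agreed_character(candidate_sequences, position):
    """Return the recognized character at `position` once every candidate
    character sequence agrees on it; otherwise return None (keep waiting).
    """
    chars = set()
    for seq in candidate_sequences:
        if position >= len(seq):
            return None          # some sequence has no hypothesis here yet
        chars.add(seq[position])
    return chars.pop() if len(chars) == 1 else None
```

Outputting after a fixed delay (e.g., the maximum character length) would instead simply read the best-scoring node's candidate sequence at that offset.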
- the character can be output in various encoding formats known in the art.
- characters can be output in 7-bit or 8-bit ASCII as is known in the art.
- characters can be output in UNICODE as is known in the art.
- UNICODE presents advantages in that a large number of alphabets and characters have been defined, although UNICODE encodings require more bits than ASCII.
- Various other encoding formats can also be used as is known in the art.
- the method 30 can be implemented by computer software implemented on a general purpose or specialized processor.
- the invention includes a computer readable medium having computer readable program code embodied thereon for implementing the method.
- the computer readable medium can include code for providing a database of characters to be recognized as discussed further below.
- the computer readable medium can also include program code for acquiring spatiotemporal input from a user interface and outputting discrete input strokes as described above.
- the computer readable medium can also include code for comparing the discrete input strokes to the database to obtain a plurality of character scores as described above.
- the computer readable medium can also include code for determining a plurality of candidate character sequences as described above.
- the computer readable medium can also include computer program code for outputting a recognized character.
- recognized characters may be output to a display, other hardware device, or to other software for further processing.
- the computer readable medium can be a floppy disk, compact disk (CD-ROM), digital video disk (DVD), flash memory (e.g., a flash drive or USB drive), read only memory, or a propagated signal (e.g. Internet communications using the internet protocol), or the like. New types of computer readable medium may also be developed in the future and may also be used to distribute computer software implementing the method.
- providing a database of characters may be performed by distributing a predefined database of characters to be recognized.
- the database may be stored on a computer readable medium such as a floppy disk, compact disk, digital video disk, read only memory, flash memory, or the like.
- providing a database of characters may be performed by a user as will now be described.
- a method of creating a handwriting recognition database is illustrated in flow chart form in FIG. 7 , in accordance with an embodiment of the present invention.
- the method 70 includes acquiring 72 spatiotemporal training input from a user corresponding to an exemplar character, wherein the spatiotemporal training input is provided in the form of discrete input strokes. For example, a user may be prompted to input a keyboard key, ASCII code, UNICODE code, or similar computer-readable designation of a character, and then provide handwriting input on a touch screen or pad corresponding to the character.
- the spatiotemporal input may be provided by a capture subsystem as discrete input strokes, or the spatiotemporal input may be separated into discrete input strokes using processing as described above.
- a next step of the method is normalizing 74 the discrete input strokes into a sequence of normalized representations. Creating normalized representations is helpful in reducing the complexity of performing comparisons of input strokes to the database.
- normalization can be performed by determining a non-uniform rational b-spline for each of the discrete input strokes.
- the non-uniform rational b-spline can be scaled to fit between 0 and 1 in all parameters (e.g. time) and coordinates (e.g. x-y values).
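The scaling step can be sketched as a rescaling of each spatial coordinate and the time stamps to the unit range. Fitting a non-uniform rational b-spline first, as described above, is omitted from this simplified sketch:

```python
def normalize_stroke(points):
    """Scale a stroke's coordinates and time stamps to the range [0, 1].

    `points` is a list of (x, y, t) samples. A degenerate dimension (e.g.,
    the x extent of a dot) is mapped to 0 rather than dividing by zero.
    """
    def scale(values):
        lo, hi = min(values), max(values)
        span = hi - lo
        return [(v - lo) / span if span else 0.0 for v in values]

    xs, ys, ts = zip(*points)
    return list(zip(scale(xs), scale(ys), scale(ts)))
```

Because every stored stroke occupies the same unit box, later comparisons against input strokes (normalized the same way) become independent of writing size and speed.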
- the method also includes storing 76 the normalized representations into the database.
- variable amounts of memory may be used for each character to be recognized to improve memory efficiency.
- a user can create a customized handwriting recognition database tuned to their particular way of writing characters.
- the handwriting recognition database need not be limited to a single alphabet.
- a user can define the database to include characters from mixed and multiple alphabets, such as English characters, Latin characters, Greek characters, Cyrillic characters, Chinese characters, Braille characters, mathematical symbols, and variants and combinations of the above.
- any alphabet which can be represented as combinations of discrete strokes can be included in the database. This can greatly enhance the functionality and utility of devices using the handwriting recognition techniques disclosed herein.
- a user can also create multiple models for an individual character, for example, where a user sometimes generates a character using 1 stroke and sometimes generates a character using 2 strokes.
- more than one normalized representation for a given character may be included in the database.
- Recognition processing can treat the two representations as though they are different characters (although the same output is ultimately produced), or the recognition processing can be simplified to take into account the two different representations.
- a first sequence 80 consists of three strokes 82 , 83 , 84 , and a second sequence 86 consists of two strokes 88 , 89 .
- the two different stroke sequences can be treated like different characters, and a database entry provided for each.
- a more efficient approach is to combine the two different possible ways of making an A into a single node.
- a comparison between two hypotheses is performed.
- One hypothesis is based on the comparison of the last three input strokes to the first sequence 80 in the database combined with the most likely node from three strokes previously in time (since the length of the first sequence is three strokes).
- the other hypothesis is based on the comparison of the last two input strokes to the second sequence 86 in the database combined with the most likely node from two strokes previously in time. The more likely of the two hypotheses is then used, and the node score and candidate character sequence updated accordingly.
- the number of nodes in the trellis does not need to be increased even when multiple different stroke sequences for each character are defined in the database.
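The merged-node update just described can be sketched in Python as follows. This is an illustrative sketch, not the patented implementation; the function name, the (score, sequence) tuples, and the callbacks for model matching and trellis history are assumptions made for clarity:

```python
def update_merged_node(char, model_lengths, match_score, best_prev, t):
    """Update one trellis node for a character with several model stroke
    sequences (e.g. an 'A' written with either 3 strokes or 2 strokes).

    char          -- the node hypothesis character
    model_lengths -- stroke count of each model sequence for the character
    match_score   -- match_score(i) -> score of matching the last
                     model_lengths[i] input strokes to model sequence i
    best_prev     -- best_prev(k) -> (score, sequence) of the most likely
                     node after stroke k
    Returns the more likely (score, sequence) hypothesis, or None if too
    few strokes have been acquired.
    """
    best = None
    for i, length in enumerate(model_lengths):
        if t - length < 0:            # not enough input strokes yet
            continue
        prev_score, prev_seq = best_prev(t - length)
        hypothesis = (prev_score + match_score(i), prev_seq + char)
        if best is None or hypothesis[0] > best[0]:
            best = hypothesis
    return best
```

Because the two model sequences share one node, the more likely hypothesis simply wins, and the trellis stays the same size.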
- FIG. 9 illustrates a flow chart of a method of performing handwriting recognition in accordance with an embodiment of the present invention.
- the first four steps 92 , 93 , 94 , 95 of the method are the same as described for handwriting recognition, and are described above with reference to FIG. 3 .
- the fifth step 96 of the method includes deciding recognized characters according to a predetermined convergence criterion.
- the predetermined convergence criteria can include waiting a predetermined number of strokes, or waiting until all of the candidate character sequences agree on a character at a given point in time.
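The second criterion, waiting until all of the candidate character sequences agree on a character, can be sketched as follows (an illustrative Python sketch; the function name and the representation of candidate sequences as strings are assumptions):

```python
def converged_prefix(candidate_sequences):
    """Return the longest prefix on which every candidate character
    sequence agrees; those characters can be emitted as recognized."""
    if not candidate_sequences:
        return ""
    prefix = []
    for chars in zip(*candidate_sequences):
        if len(set(chars)) == 1:     # all sequences agree at this position
            prefix.append(chars[0])
        else:
            break
    return "".join(prefix)
```

For example, if the node sequences are "ACB", "ACC", and "ACB", the characters "A" and "C" have converged and can be decided, while the third position remains in contention.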
- the next step 97 of the method includes determining spatial relationships between the recognized characters.
- spatial relationships can be determined based on the relationships of bounding boxes which circumscribe the characters, based on baseline heights of the characters, or based on special areas of influence defined for particular characters, or combinations of all three.
- certain characters have predefined spatial positions relative to the character which can be accounted for.
- FIG. 10 illustrates an example of an integral sign 102 , showing a bounding box 104 and baseline 106 for the integral sign. Symbols are included at various positions relative to the integral sign as known in the art, which represent integration limits 108 , 110 and the integrand 112 .
- the character recognition database can include information defining these relative positions, and character recognition can take into account the positioning of subsequently drawn characters relative to the integral as defining the integration limits and integrand.
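The positional relationships of FIG. 10 can be sketched as a simple classifier. The specific regions used here (top and bottom thirds to the right of the integral sign for the limits, the middle for the integrand) are illustrative assumptions, not the patent's definition of areas of influence:

```python
def classify_relative_to_integral(integral_box, glyph_box):
    """Classify a glyph drawn near an integral sign by position.

    Boxes are (x_min, y_min, x_max, y_max), with y increasing downward.
    Returns one of "upper limit", "lower limit", "integrand", "unrelated".
    """
    ix_min, iy_min, ix_max, iy_max = integral_box
    gx = (glyph_box[0] + glyph_box[2]) / 2   # glyph center
    gy = (glyph_box[1] + glyph_box[3]) / 2
    if gx <= ix_max:
        return "unrelated"                   # not to the right of the sign
    third = (iy_max - iy_min) / 3
    if gy < iy_min + third:
        return "upper limit"
    if gy > iy_max - third:
        return "lower limit"
    return "integrand"
```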
- the order of drawing characters can be defined differently than just described.
- the spatial parsing can also correct errors which occur during recognition. For example, since size is irrelevant when scaling of the character models is performed, symbols such as periods, dots, and commas can be misrecognized.
- the spatial parsing can thus correct these types of errors by taking into account both position and the size of the bounding box.
- fractions can be parsed before plus or minus signs.
- Bounding boxes can be defined as the smallest possible rectangle that still covers all points making up a glyph. For each glyph, the areas of influence can be defined separately.
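The bounding box definition above translates directly into code. A minimal sketch, assuming each glyph is a list of strokes and each stroke a list of (x, y) points:

```python
def bounding_box(strokes):
    """Smallest axis-aligned rectangle covering every point of a glyph.

    strokes -- list of strokes, each a list of (x, y) points
    Returns (x_min, y_min, x_max, y_max).
    """
    xs = [x for stroke in strokes for x, _ in stroke]
    ys = [y for stroke in strokes for _, y in stroke]
    return (min(xs), min(ys), max(xs), max(ys))
```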
- FIG. 11 illustrates a complex fraction 120 , which can be parsed recursively as an outer fraction with numerator 122 and denominator 124 , and an inner fraction with numerator 126 and denominator 128 .
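The recursive parse of FIG. 11 can be sketched as follows. This sketch uses an assumed heuristic, treating the widest "-" glyph as the outermost fraction bar and splitting the remaining glyphs by the vertical centers of their bounding boxes; the patent's actual parsing rules (areas of influence, ordering of fraction versus plus or minus parsing) are richer:

```python
def parse_fraction(glyphs):
    """Recursively parse glyphs into a nested fraction expression.

    glyphs -- list of (label, box) pairs, box = (x_min, y_min, x_max, y_max)
              with y increasing downward
    """
    bars = [g for g in glyphs if g[0] == "-"]
    if not bars:
        # No fraction bar: read the remaining glyphs left to right.
        return "".join(label for label, _ in sorted(glyphs, key=lambda g: g[1][0]))
    # Treat the widest bar as the outermost fraction bar (assumed heuristic).
    bar = max(bars, key=lambda g: g[1][2] - g[1][0])
    bar_y = (bar[1][1] + bar[1][3]) / 2
    others = [g for g in glyphs if g is not bar]
    above = [g for g in others if (g[1][1] + g[1][3]) / 2 < bar_y]
    below = [g for g in others if (g[1][1] + g[1][3]) / 2 >= bar_y]
    return "(" + parse_fraction(above) + ")/(" + parse_fraction(below) + ")"
```

Applied to a complex fraction such as (a/b)/c, the widest bar is found first, and the numerator group, itself containing a narrower bar, is parsed recursively.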
- the expression can be output in various formats.
- character output can be provided as a series of ASCII codes, MathType codes, LaTeX codes, or the like.
- By multi-component input is meant user input of a type which can be segmented into separate components (e.g. strokes in handwriting, phonemes in speech, gestures in image recognition, etc.).
- By symbolic is meant that the user input is represented by a machine usable symbol (e.g. a character, word, command, etc.).
- handwriting strokes represent a letter which can be encoded as a symbol within a computer system in ASCII or the like.
- embodiments of the present invention may be applied to speech processing, where speech waveforms are captured and broken into individual phonemes for recognition processing.
- recognition uses a database of model input corresponding to the reference symbols to be recognized.
- written characters can be modeled by a sequence of strokes.
- Spoken words can be modeled as a sequence of phonemes.
- the database can be preprogrammed, or generated by a user by providing exemplary input.
- the model input can be a function of time, such as a spatiotemporal input, a voltage waveform, or the like.
- Recognition can be performed in real time, acquiring an input function from a user, and then determining hypothesis symbol sequences.
- Acquiring the input function may include segmenting the input function into a sequence of input components, such as strokes, phonemes, gestures, and the like.
- updating of hypothesis symbol sequences can be based on hypothesis symbol sequences from previously in time. The updates may look back in time by a number of components equal to the length of the model input sequence for the hypothesis symbol. This may be, for example, two or more components previously in time.
- a technique for machine recognizing discrete multi-component symbolic input from a user has been invented. While described primarily for handwriting recognition, the technique can also be applied to other types of user input, including for example speech. Input from a user which includes a time function is broken into discrete components. By maintaining the time varying aspects, improved recognition can be obtained as compared to static bitmap type recognition. Processing and memory requirements are modest, growing only linearly with the number of symbols to be recognized. The recognition algorithm uses mostly simple add-compare-store type operations. Hence, the recognition technique is compatible with power and processing limited mobile computing applications.
Abstract
Techniques for recognizing discrete multi-component symbolic input from a user can be applied to, for example, handwriting or speech. The techniques can include providing a database of model input sequences, where each model input sequence corresponds to a symbol to be recognized. Input functions, for example, discrete strokes, are obtained from a user and segmented into a sequence of discrete components. Hypothesis symbol sequences are obtained by comparing the discrete components to a database of symbols to be recognized and updating hypothesis symbol sequences based on the results of the comparison and hypothesis symbol sequence history from input previously acquired in time.
Description
- This application claims the benefit of U.S. Provisional Application No. 60/819,252 entitled “Recognition System and Method” filed Jul. 6, 2006, which is incorporated herein by reference in its entirety.
- The present invention relates generally to automated recognition of user input, such as handwriting recognition. There is a strong desire for automated systems that can recognize user input to improve the quality of human-machine (e.g. human-computer) interfacing. While the computer keyboard and mouse have become ubiquitous, keyboards and mice are inconvenient input devices in many applications. For example, mobile computing devices often are too small to provide a computer keyboard. As another example, some languages are not well suited to computer keyboards.
- For example, some systems cannot accept Chinese characters, and users have been forced to use English or contrived systems such as Pinyin in using these systems. In Pinyin, Chinese characters are represented by phoneticized spellings in the Roman alphabet. Pinyin is cumbersome, however, because it requires the user to learn a new alphabet. Moreover, entering Pinyin is tedious if diacritical marks are included, since this requires additional keystrokes to define the diacritical marks. Diacritical marks are necessary to properly represent tonal information which conveys meaning in Chinese. Other systems allow input of Chinese characters by entering a combination of keyboard codes on a conventional QWERTY keyboard. Using these systems is also cumbersome, because it requires memorizing the sequence of keyboard codes for each character. Unlike keyboard entry, handwriting is quite natural. Accordingly, there exists a strong desire for handwriting recognition systems.
- Various off-line types of handwriting recognition systems are known, sometimes referred to as optical character recognition. Off-line handwriting systems are limited in their use, however, since they cannot be used for real time applications, such as instant text messaging. Accordingly, off-line techniques fail to address needs such as mobile computing, instant messaging, and real-time computer input.
- On-line handwriting recognition systems are known which can perform real-time or near real-time character recognition, but these systems are typically quite complex. For example, some known systems use hidden Markov models to represent each character within an alphabet to be recognized. Such systems typically require large amounts of memory and require significant computer processing power to perform character recognition. Hence, these systems can be slow in performing recognition or can have processing and/or memory requirements incompatible with power-limited portable computing applications.
- Existing handwriting recognition systems have also typically been limited to recognizing characters from a single alphabet. Algorithms have sometimes been optimized for a particular alphabet, and thus cannot handle (or perform poorly) when mixed alphabets are used (e.g., combinations of English and Chinese characters).
- Even greater challenges are presented by the problem of recognizing mathematical expressions. In mathematical expressions, not only are a large number of different symbols and alphabets used, but also the positional relationship of symbols is important as well. Most existing systems for entering mathematical expressions are tedious, requiring either the entry of special codes to define expressions (e.g. as in TeX) or requiring selection of each symbol from a series of menus (e.g. as in MathType).
- Briefly, and in general terms, the invention is directed to a method of recognizing discrete multi-component symbolic input from a user. The method can include providing a database of model input sequences, each model input sequence corresponding to a symbol to be recognized. Additional steps of the method can include acquiring an input function from a user, the input function including a time function, and segmenting the input function into a sequence of input components. Another step of the method can include determining at least one hypothesis symbol sequence, wherein the hypothesis symbol sequence is updated for each acquired input component. The update can be based on a comparison of the input component to the database and based on a hypothesis symbol sequence history from at least two components previously in time.
- In one embodiment, a method of recognizing handwriting input is provided, wherein symbolic characters to be recognized are defined in terms of a sequence of reference stroke components.
- In another embodiment, a method of creating a handwriting recognition database is provided which includes acquiring spatiotemporal input from a user, separating the spatiotemporal input into discrete input strokes, and storing normalized representations of the discrete input strokes into a database.
- In another embodiment, a method of recognizing handwriting is provided which includes providing a trellis definition corresponding to a plurality of multi-stroke characters to be recognized, acquiring spatiotemporal input from a user, and defining and updating a plurality of node scores for each input stroke, wherein node scores are advanced non-uniformly in time through the trellis.
- In another embodiment, a handwriting recognition system is provided which includes a capture system and a processing system. The capture system accepts spatiotemporal input from a user. The processing system separates the spatiotemporal input into discrete input strokes, compares the input strokes to a database of model character stroke sequences, and determines a plurality of candidate character sequences.
- In another embodiment, a computer readable medium having computer readable program code embodied thereon for recognizing handwritten characters is provided.
- Additional features and advantages of the invention will be apparent from the detailed description which follows, taken in conjunction with the accompanying drawings, which together illustrate, by way of example, features of the invention; and, wherein:
-
FIG. 1 is a block diagram of a system for handwriting recognition in accordance with an embodiment of the present invention; -
FIG. 2 is an illustration of a portable device which can be used for handwriting input in accordance with an embodiment of the present invention; -
FIG. 3 is a flow chart of a method for recognizing handwritten characters in accordance with an embodiment of the present invention; -
FIG. 4 is an illustration of a database of three characters in accordance with an embodiment of the present invention; -
FIG. 5 is an illustration of a trellis definition for the database of FIG. 4; -
FIG. 6 is an illustration of the updating of trellis nodes for the trellis of FIG. 5; -
FIG. 7 is a flow chart of a method of creating a handwriting recognition database in accordance with an embodiment of the present invention; -
FIG. 8 is an illustration of two different ways of forming a letter; -
FIG. 9 is a flow chart of a method of performing handwriting recognition in accordance with an embodiment of the present invention; -
FIG. 10 is an illustration of a mathematical symbol showing special areas of influence in accordance with an embodiment of the present invention; and -
FIG. 11 is an illustration of a complex fraction showing recursive parsing in accordance with an embodiment of the present invention. - Reference will now be made to the exemplary embodiments illustrated, and specific language will be used herein to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended.
- In describing the present invention, the following terminology will be used.
- The singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to a stroke includes reference to one or more strokes.
- The term “stroke” refers to a spatiotemporal input from a user, for example, defined as the time position of a pen between a pen down event and a pen up event. A stroke may be a dot, line or curve defined by a series of time-tagged coordinates or defined by a mathematical function.
- The term “character” refers to a glyph, which may consist of multiple strokes. For example, a character may be an English letter, numeral, Chinese character, mathematical symbol, or the like.
- The term “character length” refers to the number of strokes that constitute a character. Character length (and the particular strokes making up a character) may be different for different users.
- The term “stroke model” refers to a representation of a stroke within a character. For example, a stroke model may be a normalized spatiotemporal description of the stroke.
- The term “character model” refers to a series of stroke models corresponding to a particular character.
- The term “character model database” refers to the collection of character models for a set of characters to be recognized.
- The term “node” refers to a node within a trellis representation of a set of characters to be recognized.
- The term “node score” refers to a score for a particular node. For example, a node score may be a relative likelihood metric.
- The term “candidate character sequence” refers to a history of likely characters. A candidate character sequence may be associated with each node.
- The term “node hypothesis character” refers to a unique character defined for each head node. For example, the node can correspond to the hypothesis that the most recently input character is the head node hypothesis character.
- The term “trellis” refers to a representation of the possible sequences of characters on a stroke by stroke basis. A trellis can be drawn in a graph form, where nodes are connected by lines to indicate possible character sequences.
- As used herein, the term “about” means quantities, dimensions, sizes, formulations, parameters, shapes and other characteristics need not be exact, but may be approximated and/or larger or smaller, as desired, reflecting acceptable tolerances, conversion factors, rounding off, measurement error and the like and other factors known to those of skill in the art.
- Numerical data may be expressed or presented herein in a range format. It is to be understood that such a range format is used merely for convenience and brevity and thus should be interpreted flexibly to include not only the numerical values explicitly recited as the limits of the range, but also to include all the individual numerical values or sub-ranges encompassed within that range as if each numerical value and sub-range is explicitly recited. As an illustration, a numerical range of “about 1 to 5” should be interpreted to include not only the explicitly recited values of about 1 to 5, but also include individual values and sub-ranges within the indicated range. Thus, included in this numerical range are individual values such as 2, 3, and 4 and sub-ranges such as 1-3, 2-4, and 3-5, etc. This same principle applies to ranges reciting only one numerical value and should apply regardless of the breadth of the range or the characteristics being described.
- As used herein, a plurality of items may be presented in a common list for convenience. However, these lists should be construed as though each member of the list is individually identified as a separate and unique member. Thus, no individual member of such list should be construed as a de facto equivalent of any other member of the same list solely based on their presentation in a common group without indications to the contrary.
- Illustrated in
FIG. 1 is a system for handwriting recognition in accordance with an embodiment of the present invention. The system, shown generally at 10, includes a capture subsystem 12 and a processing subsystem 14. The capture subsystem is configured to accept spatiotemporal input from a user and outputs a digitized representation of the spatiotemporal input. The processing subsystem is coupled to the capture system and determines recognized characters from the digitized representation of the spatiotemporal input. The processing subsystem can include software to implement a character recognition technique as described further below. The processing system can output recognized characters, for example, to a software application or to a display. - The system can be embedded within a portable electronic device such as a cellular telephone, personal data assistant, or laptop computer, or devices which combine one or more of these functions. For example, as illustrated in
FIG. 2, a portable device 20 may include a touch screen 22 for handwriting input. The portable device may be used to provide instant messaging (e.g., via wireless communications networks). The capture subsystem 12 can include the touch screen, which digitizes and outputs a series of time tagged coordinates corresponding to the position of a stylus 24 versus time as the stylus traces a path 26 on the touch pad surface. Recognized characters may be output to a display (e.g. the touch screen) or used for internal applications (e.g. instant messaging). - A technique for recognizing characters will now be described in further detail.
FIG. 3 illustrates in flow chart form a method 30 for recognizing handwritten characters in accordance with an embodiment of the present invention. The method includes providing 32 a database of characters to be recognized. Characters can be defined as a sequence of written reference strokes. The length of each sequence can be different. For example, English letters are typically formed using about one to three strokes, although individual writers may use more or fewer strokes. In contrast, Chinese characters are typically more complex, and can have as many as about 23 strokes. Although the amount of memory used for the database will therefore vary depending on the alphabet to be recognized, efficient use of memory can be obtained by using a variable amount of memory for each character where only the actual strokes required for each character are stored. Additional detail on the database is provided below. - Character recognition begins with the step of acquiring 34 written spatiotemporal input from a user. The spatiotemporal input can be in the form of discrete input strokes. For example, as described above, spatiotemporal input can be provided by a touch pad. The touch pad may separate the spatiotemporal data into discrete input strokes.
- The method can include determining a two-dimensional position of the stylus tip as a function of time. For example, the two-dimensional position can be provided as a series of discrete samples of stylus coordinates as a function of time. Discrete input strokes may be delimited by stylus-down and stylus-up events, taking the spatiotemporal input between stylus-down and stylus-up as a single stroke. Hence, the method can include separating the spatiotemporal input into discrete input strokes.
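Separating the spatiotemporal input into discrete strokes at pen-down/pen-up boundaries can be sketched as follows (an illustrative Python sketch; the sample format, a list of time-tagged coordinates with a pen-contact flag, is an assumption):

```python
def segment_strokes(samples):
    """Split digitizer samples into discrete strokes.

    samples -- list of (t, x, y, pen_down) tuples in time order
    Returns a list of strokes, each a list of (t, x, y) samples taken
    between a pen-down event and the following pen-up event.
    """
    strokes, current = [], []
    for t, x, y, pen_down in samples:
        if pen_down:
            current.append((t, x, y))
        elif current:
            strokes.append(current)      # pen lifted: close the stroke
            current = []
    if current:                          # input ended with pen still down
        strokes.append(current)
    return strokes
```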
- Using discrete strokes as input provides greater flexibility in how handwriting is performed on the input device. For example, characters can be formed on top of each other, since the individual strokes can still be identified and separated. Accordingly, the spatial sequencing of input writing can be relaxed. This can also provide advantages when using mixed language applications where input can vary between being left-to-right and right-to-left.
- Use of spatiotemporal input, rather than static bitmap input (e.g., as in off-line character recognition systems) helps to improve performance because the time function aspect of the handwriting input is helpful in distinguishing different characters from each other. For example, a handwritten “4” and “9” may look quite similar, but be formed using different strokes by the user. For example, the “4” may be formed using two strokes, and the “9” formed using a single stroke. Maintaining the time function information can help to distinguish between these characters.
- Note that temporal sequencing of strokes need not be consistent when writing letters. For example, consider a three stroke character “A.”
The user may draw the three strokes in a different order than the order stored in the database. To accommodate this, each input stroke can be compared to each reference stroke of the character, constructing a table of stroke-to-stroke scores in which each entry is found in the same manner as the node score. Assuming the matching is bijective (each input stroke corresponds to exactly one reference stroke), the best overall assignment of input strokes to reference strokes can then be found using Munkres' (Hungarian) Algorithm. - While the discussion herein principally addresses the situation where a stroke is defined as extending from a stylus down to a stylus up event, alternate definitions may also be used within embodiments of the present invention. For example, a stroke can be defined as the spatiotemporal input over a time period delimited by some other user action, such as touching a particular portion of the touch screen. As another example, a stroke can be defined as the stylus position observed within a particular area defined on the touch pad, such as a character bounding box. Various other ways to define a stroke will occur to one of ordinary skill in the art having possession of this disclosure which can be used in embodiments of the present invention.
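The bijective stroke-to-stroke matching table mentioned above can be sketched as follows. For clarity this sketch brute-forces all assignments with itertools.permutations; Munkres' (Hungarian) Algorithm computes the same minimum-cost assignment in polynomial time:

```python
from itertools import permutations

def best_bijective_match(cost):
    """Minimum-cost bijective assignment of input strokes to model strokes.

    cost -- square matrix; cost[i][j] is the distance between input stroke i
            and model stroke j (each entry computed in the same manner as a
            node score)
    Returns (assignment, total_cost), where assignment[i] is the model
    stroke matched to input stroke i.
    """
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost
```

Because the match is bijective, a character drawn with its strokes in an unusual order still scores well against the stored model.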
- Continuing the discussion of the
method 30, a next step is comparing 36 the discrete input strokes to the database of characters to obtain a comparison result. For example, a plurality of character scores can be determined by comparing the discrete input strokes to each character defined in the database. Character scores can be obtained from a distance measure between the input strokes and the sequence of reference strokes. For example, the input stroke and reference strokes can be viewed as curves in space, defined as a function of time. The input stroke and reference stroke can be normalized in one or more spatial dimensions, and also normalized in time. Various distance measures are known in the art for determining a distance between two curves, such as the Frechet, Hausdorff, and Euclidean distances. The distance measure can also include weighting or averaging. For example, differences between the input and reference stroke at the beginning (and end) of the stroke may be weighted more or less heavily than differences at the middle of the stroke. Euclidean distance with elasticity is a specific incarnation of elastic matching. Elastic matching is a dynamic programming method which uses an abstract distance measure, such as Euclidean or another definition of distance. Elastic matching can be desirable because it can, to a certain extent, compensate for minor perturbations in letter shape. Various different predetermined weighting functions may prove useful in embodiments of the present invention, depending on, for example, user preferences and language characteristics. Average Euclidean distance provides an advantage in that it is linear in the number of points on the curve, whereas most other methods are quadratic. - Another step of the
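As one concrete (and assumed) formulation of the distance measure, the average Euclidean distance between two strokes resampled to a common number of points can be computed as follows; note the cost is linear in the number of samples:

```python
import math

def average_euclidean_distance(stroke_a, stroke_b, samples=16):
    """Average point-wise Euclidean distance between two strokes.

    Each stroke is a list of (x, y) points. Both strokes are resampled to
    the same number of points, a simple stand-in for normalization in time.
    """
    def resample(stroke, n):
        # Nearest-index resampling; arc-length resampling is another option.
        m = len(stroke)
        return [stroke[min(m - 1, round(i * (m - 1) / (n - 1)))] for i in range(n)]
    a = resample(stroke_a, samples)
    b = resample(stroke_b, samples)
    return sum(math.dist(p, q) for p, q in zip(a, b)) / samples
```

Identical strokes score zero, and the score grows smoothly as the strokes diverge; a weighting function emphasizing stroke endpoints could be applied inside the sum.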
method 30 includes selecting and updating 38 candidate character sequences based on the comparison results as the input strokes are acquired. For example, a plurality of candidate character sequences can be maintained, one candidate character sequence corresponding to each possible character to be recognized. As input strokes are received, candidate character sequences are updated to append a hypothesized character. The candidate character sequences scores can be updated based on the character scores and a history of the candidate character sequence score. For example, the character score for the hypothesized character can be added to a candidate character sequence score from previous in time. The position previous in time is based on the number of strokes of the hypothesized character. Since some characters have more than one stroke, at least one of the candidate character sequences is therefore updated based on a candidate character sequence history derived from at least two sequential strokes previously acquired. - It will be appreciated that memory requirements are relatively modest when the number of candidate character sequences that are maintained is equal to the number of possible characters to be recognized. Hence, the memory requirements and complexity of the method grow linearly as the number of characters in the database is increased. This is in contrast to some prior art recognition systems that maintain a large number of state possibilities greatly in excess of the number of input characters than can be recognized.
- Updating of the candidate character sequences can be explained further in reference to
FIGS. 4 and 5 .FIG. 4 illustrates a database of three characters to be recognized, consisting of “A” “B” and “C”. The stroke sequence for each character is shown in the table, from which it can be seen that “A” is formed from three strokes, “B” is formed from two strokes, and “C” is formed from a single stroke. Hence, the character length of A is 3, B is 2, and C is 1. -
FIG. 5 illustrates a trellis definition 50 for this example. Three nodes are defined (distributed vertically) corresponding to the three characters to be recognized. The trellis definition extends horizontally, each column corresponding to advancing by one stroke. The horizontal axis thus corresponds roughly to time, although not every stroke requires the same amount of time. The trellis definition shows all possible node sequences, where each node corresponds to a hypothesized character as a candidate character sequence is built up. It can be seen that some transition paths skip columns, corresponding to multi-stroke characters. - Recognition thus proceeds as follows. Each node corresponds to a hypothesis that the most recently acquired strokes correspond to the node's corresponding character. For ease of reference, the character corresponding to each node will be referred to as the node hypothesis character. Thus, the node score can be updated based on how well the last few acquired strokes correspond to the reference strokes of the node hypothesis character. The node score can also take into account the node score for the node that was considered most likely before the current character.
- More particularly, at time n, node scores can be updated as follows. Character scores are obtained by comparing the input strokes to the database of reference strokes. For example, a character score for “A” is obtained by comparing the last three input strokes to the three reference strokes in the database for character “A”. Node scores are then updated based on the character scores and node scores from previously in time. For example, for node “A” the node score at time n is based on the most likely node score from time n−3 combined with the character score for “A”. For node “B” the node score at time n is based on the most likely node score from time n−2 combined with the character score for “B”. Finally, for node “C” the node score at time n is based on the most likely node score from time n−1 combined with the character score for “C”. For example, the node score from previous in time can be added to the character score to obtain the new node score. Because some of the transitions skip nodes, the node scores are advanced non-uniformly in time through the trellis. Node scores are updated based on node scores from previous in time by a number of strokes corresponding to the node hypothesis character.
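The update rule just described can be sketched in Python using the three character example of FIG. 4, where A, B, and C have lengths 3, 2, and 1. The data layout, a history list of per-node (score, sequence) pairs plus a char_score callback, is an assumption made for illustration:

```python
def update_nodes(db, char_score, history, t):
    """One trellis update step at stroke t.

    db         -- {char: stroke_count}, e.g. {'A': 3, 'B': 2, 'C': 1}
    char_score -- char_score(char, t) -> score of matching the last
                  db[char] input strokes ending at stroke t to char's model
    history    -- history[k] = {char: (score, sequence)} after stroke k;
                  history[0] holds the zeroed initial state, and
                  len(history) must equal t when this is called
    Appends and returns the node states for stroke t.
    """
    nodes = {}
    for char, length in db.items():
        if t - length < 0:
            continue                     # not enough strokes acquired yet
        prev = history[t - length]       # look back by the character length
        if not prev:
            continue
        best_score, best_seq = max(prev.values(), key=lambda v: v[0])
        nodes[char] = (best_score + char_score(char, t), best_seq + char)
    history.append(nodes)
    return nodes
```

Running this with the character scores from the worked example below (t=1 through t=3) reproduces the node scores and candidate sequences described there: node A scores 1.9 with sequence "A", node B scores 0.7 with "CB", and node C scores 0.6 with "BC" after the third stroke.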
- Path history through the trellis can also be maintained in the form of candidate character sequences. When the node scores are updated, the candidate character sequences can be updated by appending the node hypothesis character to the candidate character sequence which corresponds to the most likely node from previous in time used in updating the node score.
- A more detailed example is provided by
FIG. 6 which illustrates the updating 60 of trellis nodes for a hypothetical series of input strokes for the above three character example. Time is shown in the vertical direction, with the first column in FIG. 6 showing the stroke number. The actual input sequence from the user is “ACBCCABC” which is a total of 14 strokes, and time is shown running from t=1 (the end of the first stroke) to t=14 (the end of the 14th stroke).
FIG. 6 by the columns labeled “Character Scores.” Following computation of the character scores, node scores (labeled “Score”) and candidate character sequences (labeled “Sequence”) are then updated as described above as will now be detailed. - For time t=0, the candidate character sequences are initialized to be blank, and the scores are zeroed. At time t=1, corresponding to the first stroke, only a character score for character C can be obtained, since A and B require more than one stroke. For illustration purposes, the character scores are assumed to range between 0 and 1, where 0 is a very poor match, and 1 is a very good match. Other scaling systems can be used in embodiments of the present invention as will occur to one of ordinary skill in the art in possession of this disclosure. The character score for C is 0.2. The character scores shown in this example have been artificially created, and may not correspond to actual character scores which would be obtained during actual operation using the exemplary input.
- At time t=1, only the score for node C can be updated, since there have not been enough input strokes received for an “A” or a “B”. Hence, the score for node C is updated by setting it equal to the character score for “C”. Hence, node C has a score of 0.2 at time t=1. The candidate character sequence for node C has a “C” appended.
- At time t=2, character scores for characters B and C can be obtained. The character score for B is obtained by comparing the last two input strokes to the reference strokes for B, and for this example, is equal to 0.6. The character score for C is obtained by comparing the last input stroke to the reference stroke for C, which for this example is 0.2. Note that these character scores indicate that the last input stroke is a relatively poor match to a “C” and that the last two input strokes are a reasonably good match to a “B”.
- The node scores for nodes B and C are updated based on these character scores, and the candidate character sequences updated by appending the node hypothesis character. For node B, at this point in time, the only possibility is that the two input strokes correspond to a single “B,” hence node B's score is set to 0.6 (the character score for B) and node B's candidate character sequence has a “B” appended. For node C, at this point in time, the only possibility is that the two input strokes correspond to a “C” followed by a “C.” Hence, node C's score is set equal to 0.2+0.2=0.4 (the character score for C, plus node C's score from time t=1).
- At time t=3, character scores for all three characters are obtained. Note that the character score for A is quite large, since the input strokes are an excellent match to the reference strokes. The node scores are updated. For node A, since this is the first character, the node score is the character score, 0.9. For node B, the only possible previous character is a C, so the node score is based on the node score for node C from t=1 (equal to 0.2) plus the B character score (equal to 0.5) to yield a score of 0.2+0.5=0.7. For node C, the previous character could be either a B or C at time t=2. The most likely node is used, which in this example is node B. Hence, the C node's score is 0.0 (the C character score)+0.6 (node B's score at t=2)=0.6. Node C's candidate sequence is obtained by appending a C to the candidate sequence from node B at time t=2 to yield BC.
- After completion of updates for time t=3, it can be seen that the node score for node A is quite a bit larger than the scores for nodes B and C. This is as expected, since the actual input in this example is an “A”.
- At time t=4, the next input stroke is the single stroke character C. The character score for character C is accordingly relatively large. Updates of node score and candidate character sequences are performed. The update for node C selects node A from t=3 since it was the most likely node at that time, and appends the hypothesis character C to the candidate sequence from node A. Updates for nodes A and B follow the same algorithm as described above. While the resulting candidate character sequences for node A and B do not correspond to the actual input sequence, it can be seen that the node scores are relatively low. It will be seen later in this example that the correct candidate character sequence will eventually be selected for these nodes as more input is received.
- At time t=5, the next stroke input is the beginning of a letter B. The character scores are updated, and node scores updated as above. It is interesting to note that, after updating the node scores and candidate character sequences, none of the nodes has the correct character sequence. This is not a problem, however, as future updates may reach back earlier in time to node scores and candidate character sequences which are correct. This is seen in the next update. At time t=6, node B reaches back to time t=4, and selects node C as the most likely. The candidate character sequence for node B thus is set equal to the correct sequence.
- Additional input strokes for time t=7 to t=14 are shown in
FIG. 6, along with the resulting updated node scores and candidate character sequences. These updates will not be discussed in detail, since how they are computed will be apparent from the above. It can be seen that node candidate character sequences tend to converge to the correct sequence which corresponds to the actual input sequence. - It will now be appreciated that, as each input stroke is received, candidate sequences and node scores are updated based on candidate sequences and node scores from previously in time. As each node's candidate sequence is updated, some possible sequences are effectively discarded, helping to keep the number of candidate sequences under consideration manageable. In particular, the number of different nodes in the trellis (and thus the number of candidate character sequences) under consideration can be set equal to the number of possible characters to be recognized. This is in contrast to a conventional brute force search, for which the number of possible sequences under consideration would grow exponentially with time.
- It should also be appreciated that the required computational processing is relatively simple, consisting mostly of add-compare-store type operations. Accordingly, a relatively simple processor can be used to implement the method. Alternately, the method can also be implemented in hardware, such as in a field programmable gate array or application specific integrated circuit.
- The implementation detail just described is similar to the Viterbi algorithm as known in the art. Unlike a conventional Viterbi algorithm, however, the nodes are not advanced uniformly in time through the trellis. As illustrated above, nodes may reach back many strokes in time when updating the node score and candidate character sequence.
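The add-compare-store update just described can be sketched in Python. This is an illustrative reconstruction, not the patented implementation; the character-score function is stubbed with the example values from the FIG. 6 walk-through (the 0.9 score for “A” at t=3 is an assumed value, since only its rough magnitude is described):

```python
def update_trellis(nodes, stroke_len, char_score, scores, sequences, t):
    """One add-compare-store trellis update after input stroke t (1-based).

    scores[c] and sequences[c] are dicts keyed by stroke count, holding
    node c's best score and candidate character sequence at that time.
    char_score(c, t) compares the last stroke_len[c] input strokes to
    character c's reference strokes (stubbed with a table below).
    """
    for c in nodes:
        n = stroke_len[c]
        if t < n:
            continue  # not enough input strokes received yet for this character
        s = char_score(c, t)
        if t == n:
            # The character would be the first in the sequence.
            scores[c][t] = s
            sequences[c][t] = c
        else:
            # Reach back n strokes and extend the most likely node there.
            candidates = [p for p in nodes if (t - n) in scores[p]]
            if not candidates:
                continue
            best = max(candidates, key=lambda p: scores[p][t - n])
            scores[c][t] = scores[best][t - n] + s
            sequences[c][t] = sequences[best][t - n] + c

# Replay t=1..3 of the example (A: 3 strokes, B: 2 strokes, C: 1 stroke).
nodes = ["A", "B", "C"]
stroke_len = {"A": 3, "B": 2, "C": 1}
table = {("C", 1): 0.2, ("B", 2): 0.6, ("C", 2): 0.2,
         ("A", 3): 0.9, ("B", 3): 0.5, ("C", 3): 0.0}
scores = {c: {} for c in nodes}
sequences = {c: {} for c in nodes}
for t in (1, 2, 3):
    update_trellis(nodes, stroke_len, lambda c, tt: table[(c, tt)],
                   scores, sequences, t)
```

After the third update this reproduces the node states described above: node B holds “CB” with score 0.7, and node C holds “BC” with score 0.6.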
- The above discussion with respect to
FIGS. 5 and 6 has not made reference to any particular arrangement of the node scores and candidate character sequences in memory. Many different arrangements of these parameters within memory can be used as will occur to one of ordinary skill in the art having possession of this disclosure. - Returning to the discussion of the method 30 (
FIG. 3 ), the method can also include the step of outputting a recognized character. For example, a recognized character can be output after a predetermined number of strokes, such as the maximum character length. It will be appreciated from the example shown above in FIGS. 4-5 that, after a delay equal to the maximum character length, the candidate character sequences will typically have converged and agree on what characters were input. Accordingly, recognized characters can be taken from one of the candidate character sequences and output after a delay. The delay can be equal to the maximum character length, or some other value. As another example, recognized characters can be output after all of the candidate character sequences agree on the recognized character for a given point in time. Various other approaches for deciding when to output characters will occur to one of ordinary skill in the art in possession of this disclosure. - Output of recognized characters can be illustrated with reference to
FIG. 6 . It can be seen that, at time t=3, the first complete character has been received. After a delay of three more strokes, at time t=6, all three nodes' candidate character sequences have been updated to the point that they are in agreement that the first character is an “A”. - It should be appreciated that the output of characters can be quite rapid. As illustrated by
FIG. 6 , convergence is quite rapid, and thus recognized characters can be output with relatively little delay. This provides advantages in real time applications, both in responsiveness of the system to the user, as well as allowing the user to quickly correct errors in the recognition process. - The character can be output in various encoding formats known in the art. For example, characters can be output in 7-bit or 8-bit ASCII as is known in the art. As another example, characters can be output in UNICODE as is known in the art. UNICODE presents advantages in that a large number of alphabets and characters have been defined, although UNICODE encodings require more bits than ASCII. Various other encoding formats can also be used as is known in the art.
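The agreement criterion for deciding when to output a character, and the encoded output itself, can be sketched as follows. The node sequences used in the demonstration are hypothetical, not taken from FIG. 6:

```python
def agreed_character(candidates, position):
    """Return the character at `position` once every node's candidate
    character sequence agrees on it, else None. Illustrative sketch of
    one of the output criteria described above."""
    if any(len(seq) <= position for seq in candidates):
        return None  # some node's sequence is not long enough yet
    chars = {seq[position] for seq in candidates}
    return chars.pop() if len(chars) == 1 else None

# Hypothetical node sequences: all nodes agree the first character is "A".
candidates = ["ACB", "AC", "ACC"]

# Once recognized, the character can be emitted in ASCII or UNICODE:
recognized = agreed_character(candidates, 0)   # "A"
ascii_byte = recognized.encode("ascii")        # one 7-bit ASCII byte
utf8_bytes = "\u222b".encode("utf-8")          # integral sign, three bytes
```

The UTF-8 example shows the trade-off noted above: the integral sign (U+222B) takes three bytes in UTF-8, whereas an ASCII character takes one.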
- The
method 30 can be implemented by computer software executing on a general purpose or specialized processor. Accordingly, in an embodiment, the invention includes a computer readable medium having computer readable program code embodied thereon for implementing the method. For example, the computer readable medium can include code for providing a database of characters to be recognized as discussed further below. The computer readable medium can also include program code for acquiring spatiotemporal input from a user interface and outputting discrete input strokes as described above. The computer readable medium can also include code for comparing the discrete input strokes to the database to obtain a plurality of character scores as described above. The computer readable medium can also include code for determining a plurality of candidate character sequences as described above. The computer readable medium can also include computer program code for outputting a recognized character. For example, recognized characters may be output to a display, other hardware device, or to other software for further processing. - Various types of computer readable medium are known in the art which can be used. For example, the computer readable medium can be a floppy disk, compact disk (CD-ROM), digital video disk (DVD), flash memory (e.g., a flash drive or USB drive), read only memory, or a propagated signal (e.g. Internet communications using the internet protocol), or the like. New types of computer readable medium may also be developed in the future and may also be used to distribute computer software implementing the method.
- The database of characters to be recognized will now be discussed in further detail. In an embodiment of the invention, providing a database of characters may be performed by distributing a predefined database of characters to be recognized. For example, the database may be stored on a computer readable medium such as a floppy disk, compact disk, digital video disk, read only memory, flash memory, or the like. In another embodiment, providing a database of characters may be performed by a user as will now be described.
- A method of creating a handwriting recognition database is illustrated in flow chart form in
FIG. 7 , in accordance with an embodiment of the present invention. The method 70 includes acquiring 72 spatiotemporal training input from a user corresponding to an exemplar character, wherein the spatiotemporal training input is provided in the form of discrete input strokes. For example, a user may be prompted to input a keyboard key, ASCII code, UNICODE code, or similar computer-readable designation of a character, and then provide handwriting input on a touch screen or pad corresponding to the character. The spatiotemporal input may be provided by a capture subsystem as discrete input strokes, or the spatiotemporal input may be separated into discrete input strokes using processing as described above. - A next step of the method is normalizing 74 the discrete input strokes into a sequence of normalized representations. Creating normalized representations is helpful in reducing the complexity of performing comparisons of input strokes to the database. In an embodiment, normalization can be performed by determining a non-uniform rational b-spline for each of the discrete input strokes. The non-uniform rational b-spline can be scaled to fit between 0 and 1 in all parameters (e.g. time) and coordinates (e.g. x-y values).
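The scaling step of the normalization can be sketched as follows. Only the scaling to the 0-1 range is shown; fitting a non-uniform rational b-spline through the scaled samples, as described above, is omitted, and the sample stroke is hypothetical:

```python
def normalize_stroke(samples):
    """Scale a stroke's (t, x, y) samples to the range [0, 1] in every
    parameter and coordinate, per the normalization described above."""
    def scale(values):
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0  # guard degenerate strokes such as dots
        return tuple((v - lo) / span for v in values)

    ts, xs, ys = zip(*samples)
    return list(zip(scale(ts), scale(xs), scale(ys)))

# A hypothetical three-sample stroke, normalized:
stroke = normalize_stroke([(0, 10, 20), (1, 20, 40), (2, 30, 60)])
```

A degenerate stroke such as a dot, whose extent is zero in some dimension, maps to 0 in that dimension rather than dividing by zero.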
- The method also includes storing 76 the normalized representations into the database. As discussed above, variable amounts of memory may be used for each character to be recognized to improve memory efficiency.
- Using the
method 70, a user can create a customized handwriting recognition database tuned to their particular way of writing characters. The handwriting recognition database need not be limited to a single alphabet. For example, a user can define the database to include characters from mixed and multiple alphabets, such as English characters, Latin characters, Greek characters, Cyrillic characters, Chinese characters, Braille characters, mathematical symbols, and variants and combinations of the above. In general, virtually any alphabet which can be represented as combinations of discrete strokes can be included in the database. This can greatly enhance the functionality and utility of devices using the handwriting recognition techniques disclosed herein. - A user can also create multiple models for an individual character, for example, where a user sometimes generates a character using 1 stroke and sometimes generates a character using 2 strokes. Thus, more than one normalized representation for a given character may be included in the database. Recognition processing can treat the two representations as though they are different characters (although the same output is ultimately produced), or the recognition processing can be simplified to take into account the two different representations.
- For example, suppose a user has two different stroke sequences for making an “A” as illustrated in
FIG. 8 . A first sequence 80 consists of three strokes, and a second sequence 86 consists of two strokes.
first sequence 80 in the database combined with the most likely node from three strokes previously in time (since the length of the first sequence is three strokes). The other hypothesis is based on the comparison of the last two input strokes to the second sequence 86 in the database combined with the most likely node from two strokes previously in time. The more likely of the two hypotheses is then used, and the node score and candidate character sequence updated accordingly. Using this approach, the number of nodes in the trellis does not need to be increased even when multiple different stroke sequences for each character are defined in the database. - Additional processing may be performed when performing recognition of mathematical expressions. Mathematical expressions often include superscripts, subscripts, and other positional relationships between characters which carry important information.
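The two-hypothesis, combined-node update described above can be sketched as follows. The model names and scores are illustrative assumptions, not values from the disclosure:

```python
def update_multi_model_node(variants, t, char_score, best_prev):
    """Compare the hypotheses for a character with several stroke-sequence
    models (e.g. a three-stroke and a two-stroke "A") and keep the more
    likely one, so only a single trellis node is needed.

    variants: list of (model_id, stroke_count); char_score(model_id, t)
    scores the last stroke_count input strokes against that model;
    best_prev(k) is the best node score k strokes earlier (None where no
    node has a score yet)."""
    best = None
    for model_id, n in variants:
        if t < n:
            continue  # not enough strokes for this model yet
        prev = 0.0 if t == n else best_prev(t - n)
        if prev is None:
            continue  # no valid predecessor node that far back
        score = prev + char_score(model_id, t)
        if best is None or score > best[0]:
            best = (score, model_id)
    return best  # (node score, winning model) or None

# Hypothetical scores at t=5: the two-stroke model of "A" wins.
cs = {("A3", 5): 0.4, ("A2", 5): 0.7}
best = update_multi_model_node([("A3", 3), ("A2", 2)], 5,
                               lambda m, tt: cs[(m, tt)],
                               {2: 0.5, 3: 0.9}.get)
```

Note that each variant reaches back a different number of strokes, matching its own model length, before the two hypotheses are compared.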
FIG. 9 illustrates a flow chart of a method of performing handwriting recognition in accordance with an embodiment of the present invention. The first four steps of the method are similar to the corresponding steps described above with reference to FIG. 3. The fifth step 96 of the method includes deciding recognized characters according to a predetermined convergence criterion. For example, as described above, the predetermined convergence criterion can include waiting a predetermined number of strokes, or waiting until all of the candidate character sequences agree on a character at a given point in time. - The
next step 97 of the method includes determining spatial relationships between the recognized characters. For example, spatial relationships can be determined based on the relationships of bounding boxes which circumscribe the characters, based on baseline heights of the characters, or based on special areas of influence defined for particular characters, or combinations of all three. For example, certain characters have predefined spatial positions that relate to the character which can be accounted for. FIG. 10 illustrates an example of an integral sign 102, showing a bounding box 104 and baseline 106 for the integral sign. Symbols are included at various positions relative to the integral sign as known in the art, which represent integration limits and an integrand 112. Accordingly, the character recognition database can include information defining these relative positions, and character recognition can take into account the positioning of subsequently drawn characters relative to the integral as defining the integration limits and integrand. Of course, the order of drawing characters can be defined differently than just described. - The spatial parsing can also correct errors which occur during recognition. For example, since size is irrelevant when scaling of the character models is performed, symbols such as periods, dots, and commas can be misrecognized. The spatial parsing can thus correct these types of errors by taking into account both position and the size of the bounding box.
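A bounding-box-based spatial classification of the kind described above can be sketched as follows. The region boundaries, the y-grows-upward convention, and the sample boxes are all illustrative assumptions:

```python
def bounding_box(points):
    """Smallest rectangle (x0, y0, x1, y1) covering all points of a glyph."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs), max(ys))

def relative_position(base, other):
    """Classify `other` relative to `base` for a glyph such as an
    integral sign: above the base box and to its right reads as an
    upper limit, below as a lower limit, otherwise as the integrand."""
    bx0, by0, bx1, by1 = base
    ox0, oy0, ox1, oy1 = other
    if ox0 < bx1:
        return "other"  # not to the right of the base glyph
    center_y = (oy0 + oy1) / 2
    if center_y > by1:
        return "upper limit"
    if center_y < by0:
        return "lower limit"
    return "integrand"

# Hypothetical box around an integral sign, built from sample points:
integral = bounding_box([(0.0, 0.0), (0.3, 3.0), (1.0, 1.5)])
```

Comparing the sizes of such boxes also supports the error correction noted above, e.g. distinguishing a period from a scaled-down circle.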
- Various orders for performing the spatial parsing can be used. For example, fractions can be parsed before plus or minus signs. Bounding boxes can be defined as the smallest possible rectangle that still covers all points making up a glyph. For each glyph, the areas of influence can be defined separately.
- As another example,
FIG. 11 illustrates a complex fraction 120, which can be parsed recursively as an outer fraction with numerator 122 and denominator 124, and an inner fraction with numerator 126 and denominator 128. - Once a mathematical expression has been recognized, the expression can be output in various formats. For example, character output can be provided as a series of ASCII codes, MathType codes, LaTeX codes, or the like.
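The recursive parse of such a nested fraction can be sketched as a small tree renderer producing LaTeX-style output. The tree and symbol names below are an illustrative stand-in for FIG. 11:

```python
def render(node):
    """Render a recursively parsed expression tree to a LaTeX-style
    string; a (numerator, denominator) pair represents one fraction."""
    if isinstance(node, tuple):
        numerator, denominator = node
        return "\\frac{%s}{%s}" % (render(numerator), render(denominator))
    return str(node)  # leaf symbol

# Outer fraction whose numerator is itself an inner fraction: (a/b) / c
complex_fraction = (("a", "b"), "c")
```

Because fractions nest, the renderer calls itself on the numerator and denominator, mirroring the recursive spatial parsing described above.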
- While the foregoing discussion has focused principally on handwriting recognition, embodiments of the present invention can be used for other types of machine recognition of discrete multi-component symbolic input. By multi-component input is meant user input of a type which can be segmented into separate components (e.g. strokes in handwriting, phonemes in speech, gestures in image recognition, etc.). By symbolic is meant that the user input is represented by a machine usable symbol (e.g. a character, word, command, etc.). For example, as described above, handwriting strokes represent a letter which can be encoded as a symbol within a computer system in ASCII or the like. Similarly, embodiments of the present invention may be applied to speech processing, where speech waveforms are captured and broken into individual phonemes for recognition processing.
- As described above, recognition uses a database of model input corresponding to the reference symbols to be recognized. For example, written characters can be modeled by a sequence of strokes. Spoken words can be modeled as a sequence of phonemes. The database can be preprogrammed, or generated by a user by providing exemplary input. The model input can be a function of time, such as a spatiotemporal input, a voltage waveform, or the like.
- Recognition can be performed in real time, acquiring an input function from a user, and then determining hypothesis symbols sequences. Acquiring the input function may include segmenting the input function into a sequence of input components, such as strokes, phonemes, gestures, and the like. As discussed above, since the model input for a symbol may have multiple input components, updating of hypothesis symbol sequences can be based on hypothesis symbol sequences from previously in time. The updates may look back in time by a number of components equal to the length of the model input sequence for the hypothesis symbol. This may be, for example, two or more components previously in time.
- Summarizing, and reiterating to some extent, a technique for machine recognizing discrete multi-component symbolic input from a user has been invented. While described primarily for handwriting recognition, the technique can also be applied to other types of user input, including for example speech. Input from a user which includes a time function is broken into discrete components. By maintaining the time varying aspects, improved recognition can be obtained as compared to static bitmap type recognition. Processing and memory requirements are modest, growing only linearly with the number of symbols to be recognized. The recognition algorithm uses mostly simple add-compare-store type operations. Hence, the recognition technique is compatible with power and processing limited mobile computing applications.
- While the foregoing examples are illustrative of the principles of the present invention in one or more particular applications, it will be apparent to those of ordinary skill in the art that numerous modifications in form, usage and details of implementation can be made without the exercise of inventive faculty, and without departing from the principles and concepts of the invention.
Claims (9)
1-22. (canceled)
23. A method (70) of creating a handwriting recognition database comprising the steps of:
a) acquiring (72) spatiotemporal training input from a user corresponding to an exemplar character, wherein the spatiotemporal input is provided in the form of discrete input strokes;
b) normalizing (74) the discrete input strokes into a sequence of normalized representations; and
c) storing (76) the normalized representations into the database.
24. The method of claim 23 , wherein the step of acquiring training spatiotemporal input comprises separating the spatiotemporal input into the discrete input strokes.
25. The method of claim 23 , wherein the step of normalizing the discrete input strokes comprises forming a non-uniform rational b-spline corresponding to the input stroke.
26. The method of claim 23 , wherein the non-uniform rational b-spline is normalized to scale it to fit between 0 and 1 in all parameters and coordinates.
27. The method of claim 23 , further comprising storing more than one normalized representation for a given exemplar character within the database.
28. A method of recognizing handwritten input, comprising:
a) providing a trellis definition (50) having a plurality of nodes corresponding to a plurality of written characters to be recognized, each character defined by at least one discrete stroke element;
b) acquiring (34) written spatiotemporal input from a user in the form of discrete input strokes; and
c) defining and updating a plurality of node scores (60) for each discrete input stroke, wherein the node scores are advanced non-uniformly in time through the trellis.
29. The method of claim 28 wherein the step of updating a plurality of node scores comprises updating each node score based on a node score obtained previously in time from a number of strokes corresponding to a node hypothesis character.
30-37. (canceled)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/243,261 US20120014603A1 (en) | 2006-07-06 | 2011-09-23 | Recognition method and system |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US81925206P | 2006-07-06 | 2006-07-06 | |
US11/825,471 US8050500B1 (en) | 2006-07-06 | 2007-07-06 | Recognition method and system |
US13/243,261 US20120014603A1 (en) | 2006-07-06 | 2011-09-23 | Recognition method and system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/825,471 Division US8050500B1 (en) | 2006-07-06 | 2007-07-06 | Recognition method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20120014603A1 (en) | 2012-01-19 |
Family
ID=44839643
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/825,471 Expired - Fee Related US8050500B1 (en) | 2006-07-06 | 2007-07-06 | Recognition method and system |
US13/243,261 Abandoned US20120014603A1 (en) | 2006-07-06 | 2011-09-23 | Recognition method and system |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/825,471 Expired - Fee Related US8050500B1 (en) | 2006-07-06 | 2007-07-06 | Recognition method and system |
Country Status (1)
Country | Link |
---|---|
US (2) | US8050500B1 (en) |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US10152299B2 (en) | 2015-03-06 | 2018-12-11 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US10460227B2 (en) | 2015-05-15 | 2019-10-29 | Apple Inc. | Virtual assistant in a communication session |
US10200824B2 (en) | 2015-05-27 | 2019-02-05 | Apple Inc. | Systems and methods for proactively identifying and surfacing relevant content on a touch-sensitive device |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US9578173B2 (en) | 2015-06-05 | 2017-02-21 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US20160378747A1 (en) | 2015-06-29 | 2016-12-29 | Apple Inc. | Virtual assistant for media playback |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10740384B2 (en) | 2015-09-08 | 2020-08-11 | Apple Inc. | Intelligent automated assistant for media search and playback |
US10331312B2 (en) | 2015-09-08 | 2019-06-25 | Apple Inc. | Intelligent automated assistant in a media environment |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10956666B2 (en) | 2015-11-09 | 2021-03-23 | Apple Inc. | Unconventional virtual assistant interactions |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US11227589B2 (en) | 2016-06-06 | 2022-01-18 | Apple Inc. | Intelligent list reading |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
DK179309B1 (en) | 2016-06-09 | 2018-04-23 | Apple Inc | Intelligent automated assistant in a home environment |
US12223282B2 (en) | 2016-06-09 | 2025-02-11 | Apple Inc. | Intelligent automated assistant in a home environment |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10146429B2 (en) | 2016-06-10 | 2018-12-04 | Apple Inc. | Character recognition method, apparatus and device |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10586535B2 (en) | 2016-06-10 | 2020-03-10 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
DK201670540A1 (en) | 2016-06-11 | 2018-01-08 | Apple Inc | Application integration with a digital assistant |
DK179343B1 (en) | 2016-06-11 | 2018-05-14 | Apple Inc | Intelligent task discovery |
DK179415B1 (en) | 2016-06-11 | 2018-06-14 | Apple Inc | Intelligent device arbitration and control |
DK179049B1 (en) | 2016-06-11 | 2017-09-18 | Apple Inc | Data driven natural language event detection and classification |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
US10163004B2 (en) * | 2017-03-30 | 2018-12-25 | Konica Minolta Laboratory U.S.A., Inc. | Inferring stroke information from an image |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
DK201770383A1 (en) | 2017-05-09 | 2018-12-14 | Apple Inc. | User interface for correcting recognition errors |
DK201770439A1 (en) | 2017-05-11 | 2018-12-13 | Apple Inc. | Offline personal assistant |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
DK201770427A1 (en) | 2017-05-12 | 2018-12-20 | Apple Inc. | Low-latency intelligent automated assistant |
DK179745B1 (en) | 2017-05-12 | 2019-05-01 | Apple Inc. | SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
DK179496B1 (en) | 2017-05-12 | 2019-01-15 | Apple Inc. | USER-SPECIFIC Acoustic Models |
DK201770432A1 (en) | 2017-05-15 | 2018-12-21 | Apple Inc. | Hierarchical belief states for digital assistants |
DK201770431A1 (en) | 2017-05-15 | 2018-12-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US10403278B2 (en) | 2017-05-16 | 2019-09-03 | Apple Inc. | Methods and systems for phonetic matching in digital assistant services |
US10311144B2 (en) | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
DK179560B1 (en) | 2017-05-16 | 2019-02-18 | Apple Inc. | Far-field extension for digital assistant services |
US10303715B2 (en) | 2017-05-16 | 2019-05-28 | Apple Inc. | Intelligent automated assistant for media exploration |
US20180336892A1 (en) | 2017-05-16 | 2018-11-22 | Apple Inc. | Detecting a trigger of a digital assistant |
US10657328B2 (en) | 2017-06-02 | 2020-05-19 | Apple Inc. | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
US10445429B2 (en) | 2017-09-21 | 2019-10-15 | Apple Inc. | Natural language understanding using vocabularies with compressed serialized tries |
US10755051B2 (en) | 2017-09-29 | 2020-08-25 | Apple Inc. | Rule-based natural language processing |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
DK179822B1 (en) | 2018-06-01 | 2019-07-12 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
DK180639B1 (en) | 2018-06-01 | 2021-11-04 | Apple Inc | Dismissal of an attention-aware virtual assistant
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
DK201870355A1 (en) | 2018-06-01 | 2019-12-16 | Apple Inc. | Virtual assistant operation in multi-device environments |
US11076039B2 (en) | 2018-06-03 | 2021-07-27 | Apple Inc. | Accelerated task performance |
US11010561B2 (en) | 2018-09-27 | 2021-05-18 | Apple Inc. | Sentiment prediction from textual data |
US11170166B2 (en) | 2018-09-28 | 2021-11-09 | Apple Inc. | Neural typographical error modeling via generative adversarial networks |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US10839159B2 (en) | 2018-09-28 | 2020-11-17 | Apple Inc. | Named entity normalization in a spoken dialog system |
US11475898B2 (en) | 2018-10-26 | 2022-10-18 | Apple Inc. | Low-latency multi-speaker speech recognition |
US11638059B2 (en) | 2019-01-04 | 2023-04-25 | Apple Inc. | Content playback on multiple devices |
US11388142B2 (en) * | 2019-01-15 | 2022-07-12 | Infoblox Inc. | Detecting homographs of domain names |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
DK201970509A1 (en) | 2019-05-06 | 2021-01-15 | Apple Inc | Spoken notifications |
US11423908B2 (en) | 2019-05-06 | 2022-08-23 | Apple Inc. | Interpreting spoken requests |
US11475884B2 (en) | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
US11140099B2 (en) | 2019-05-21 | 2021-10-05 | Apple Inc. | Providing message response suggestions |
US11289073B2 (en) | 2019-05-31 | 2022-03-29 | Apple Inc. | Device text to speech |
US11496600B2 (en) | 2019-05-31 | 2022-11-08 | Apple Inc. | Remote execution of machine-learned models |
DK180129B1 (en) | 2019-05-31 | 2020-06-02 | Apple Inc. | USER ACTIVITY SHORTCUT SUGGESTIONS |
DK201970510A1 (en) | 2019-05-31 | 2021-02-11 | Apple Inc | Voice identification in digital assistant systems |
US11360641B2 (en) | 2019-06-01 | 2022-06-14 | Apple Inc. | Increasing the relevance of new available information |
US11488406B2 (en) | 2019-09-25 | 2022-11-01 | Apple Inc. | Text detection using global geometry estimators |
CN111310548B (en) * | 2019-12-04 | 2023-09-19 | 武汉汉德瑞庭科技有限公司 | Method for identifying stroke types in online handwriting |
US11270104B2 (en) | 2020-01-13 | 2022-03-08 | Apple Inc. | Spatial and temporal sequence-to-sequence modeling for handwriting recognition |
US11043220B1 (en) | 2020-05-11 | 2021-06-22 | Apple Inc. | Digital assistant hardware abstraction |
JP2023140051A (en) * | 2022-03-22 | 2023-10-04 | Fujifilm Business Innovation Corp. | Information processing device and information processing program
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5768423A (en) * | 1994-09-02 | 1998-06-16 | Panasonic Technologies Inc. | Trie structure based method and apparatus for indexing and searching handwritten databases with dynamic search sequencing |
US20050100214A1 (en) * | 2003-11-10 | 2005-05-12 | Microsoft Corporation | Stroke segmentation for template-based cursive handwriting recognition |
US20050111736A1 (en) * | 2002-02-08 | 2005-05-26 | Microsoft Corporation | Ink gestures |
US6956969B2 (en) * | 1996-05-23 | 2005-10-18 | Apple Computer, Inc. | Methods and apparatuses for handwriting recognition |
US20060045337A1 (en) * | 2004-08-26 | 2006-03-02 | Microsoft Corporation | Spatial recognition and grouping of text and graphics |
US7502509B2 (en) * | 2006-05-12 | 2009-03-10 | Velosum, Inc. | Systems and methods for digital pen stroke correction |
US7505041B2 (en) * | 2004-01-26 | 2009-03-17 | Microsoft Corporation | Iteratively solving constraints in a font-hinting language |
US20090136136A1 (en) * | 2005-02-15 | 2009-05-28 | Kite Image Technologies Inc. | Method for handwritten character recognition, system for handwritten character recognition, program for handwritten character recognition and storing medium |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5544257A (en) | 1992-01-08 | 1996-08-06 | International Business Machines Corporation | Continuous parameter hidden Markov model approach to automatic handwriting recognition |
JP2000502479A (en) | 1996-10-04 | 2000-02-29 | Philips Electronics N.V. | Online handwritten character recognition method and apparatus based on feature vectors using aggregated observations extracted from time-series frames
US6111985A (en) | 1997-06-06 | 2000-08-29 | Microsoft Corporation | Method and mechanism for providing partial results in full context handwriting recognition |
CN1156741C (en) | 1998-04-16 | 2004-07-07 | 国际商业机器公司 | Chinese handwriting identifying method and device |
US6285786B1 (en) | 1998-04-30 | 2001-09-04 | Motorola, Inc. | Text recognizer and method using non-cumulative character scoring in a forward search |
US6567548B2 (en) | 1999-01-29 | 2003-05-20 | International Business Machines Corporation | Handwriting recognition system and method using compound characters for improved recognition accuracy |
US7054810B2 (en) | 2000-10-06 | 2006-05-30 | International Business Machines Corporation | Feature vector-based apparatus and method for robust pattern recognition |
US20050175242A1 (en) | 2003-04-24 | 2005-08-11 | Fujitsu Limited | Online handwritten character input device and method |
US7184591B2 (en) | 2003-05-21 | 2007-02-27 | Microsoft Corporation | Systems and methods for adaptive handwriting recognition |
- 2007-07-06: US application US11/825,471 filed, granted as US8050500B1 (status: Expired - Fee Related)
- 2011-09-23: US application US13/243,261 filed, published as US20120014603A1 (status: Abandoned)
Non-Patent Citations (1)
Title |
---|
Lu et al., "Extraction and Optimization of B-Spline PBD Templates for Recognition of Connected Handwritten Digit Strings," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, pp. 132-139, January 2002 ("Lu") *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103218199A (en) * | 2013-02-26 | 2013-07-24 | 马骏 | Phonetic input method with identification code input function |
CN107330379A (en) * | 2017-06-13 | 2017-11-07 | Inner Mongolia University | Mongolian handwriting recognition method and device
US20220314637A1 (en) * | 2019-11-19 | 2022-10-06 | Xi'an Jiaotong University | Braille printing method and system thereof |
Also Published As
Publication number | Publication date |
---|---|
US8050500B1 (en) | 2011-11-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8050500B1 (en) | Recognition method and system | |
Tappert | Cursive script recognition by elastic matching | |
JP2669583B2 (en) | Computer-based method and system for handwriting recognition | |
Lin et al. | Style-preserving English handwriting synthesis | |
AU737039B2 (en) | Methods and apparatuses for handwriting recognition | |
KR101457456B1 (en) | Apparatus and Method of personal font generation | |
US9711117B2 (en) | Method and apparatus for recognising music symbols | |
WO1995008158A1 (en) | Universal symbolic handwriting recognition system | |
CN102708862B (en) | Touch-assisted real-time speech recognition system and real-time speech/action synchronous decoding method thereof | |
EP1854048A1 (en) | Recognition graph | |
US11393231B2 (en) | System and method for text line extraction | |
US20230096728A1 (en) | System and method for text line and text block extraction | |
JP2012043385A (en) | Character recognition device and character recognition method | |
CN101315666A (en) | A multi-channel handwritten Chinese error correction method based on speech | |
CN107912062B (en) | System and method for overlaying handwriting | |
Vuori et al. | Influence of erroneous learning samples on adaptation in on-line handwriting recognition | |
Kumar et al. | Online Devanagari isolated character recognition for the iPhone using Hidden Markov Models | |
JP2020013460A (en) | Information processing apparatus, character recognition method, and character recognition program | |
CN113176830A (en) | Recognition model training method, recognition device, electronic equipment and storage medium | |
AU2020103527A4 (en) | IPDN- Read Handwriting: Intelligent Process to Read Handwriting Using Deep Learning and Neural Networks | |
US20240331428A1 (en) | Parallel processing of extracted elements | |
Yu et al. | Statistical Structure Modeling and Optimal Combined Strategy Based Chinese Components Recognition | |
AU764561B2 (en) | Method and apparatuses for handwriting recognition | |
Deufemia et al. | A dynamic stroke segmentation technique for sketched symbol recognition | |
CN110647245A (en) | Handwriting input method based on DTW algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |