
WO2025031608A1 - Enhanced spell checking and auto-completion for text that is handwritten on a computer device - Google Patents


Info

Publication number
WO2025031608A1
Authority
WO
WIPO (PCT)
Prior art keywords
characters
handwritten strokes
handwritten
word
suggested
Prior art date
Application number
PCT/EP2023/085637
Other languages
French (fr)
Inventor
Ho Chuen CHAN
Tin Lam
Tsun LEE
Kwan Yau LAU
Ho Ching FONG
Chun Yu KOK
Calvin Cheng
Juan CATALÁN
Juan ESTRELLA
Pak WONG
Weijie Cai
Original Assignee
Goodnotes Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goodnotes Limited
Publication of WO2025031608A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/274 Converting codes to words; Guess-ahead of partial word inputs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F 3/023 Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F 3/0233 Character input methods
    • G06F 3/0237 Character input methods using prediction or retrieval techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/10 Text processing
    • G06F 40/103 Formatting, i.e. changing of presentation of documents
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/10 Text processing
    • G06F 40/103 Formatting, i.e. changing of presentation of documents
    • G06F 40/106 Display of layout of documents; Previewing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/10 Text processing
    • G06F 40/103 Formatting, i.e. changing of presentation of documents
    • G06F 40/109 Font handling; Temporal or kinetic typography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/232 Orthographic correction, e.g. spell checking or vowelisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 Character recognition
    • G06V 30/22 Character recognition characterised by the type of writing
    • G06V 30/226 Character recognition characterised by the type of writing of cursive writing
    • G06V 30/2268 Character recognition characterised by the type of writing of cursive writing using stroke segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 Character recognition
    • G06V 30/24 Character recognition characterised by the processing or recognition method
    • G06V 30/248 Character recognition characterised by the processing or recognition method involving plural approaches, e.g. verification by template match; Resolving confusion among similar patterns, e.g. "O" versus "Q"

Definitions

  • Embodiments of the present invention generally relate to systems and methods for spell checking and automatic completion of text that is handwritten on a computer device.
  • Text that is handwritten on a computer device using an electronic device or other input tool presents challenges in identifying the characters of the handwritten text that are not present when converting keystrokes to characters.
  • FIG. 1 illustrates an example user interface for real-time assessment and correction of text that is handwritten into a computer device, in accordance with one embodiment.
  • FIG. 2 shows an example process for real-time assessment and completion of text that is handwritten into a computer device, in accordance with one embodiment.
  • FIG. 3 shows an example process for real-time assessment and correction of text that is handwritten into a computer device, in accordance with one embodiment.
  • FIG. 4A shows an example user interface with handwritten strokes and indications of incorrectly spelled words represented by the handwritten strokes, in accordance with one embodiment.
  • FIG. 4B shows an example user interface with handwritten strokes and suggested auto-completions from which users may choose to complete the word, in accordance with one embodiment.
  • FIG. 4C shows an example user interface with handwritten strokes, indications of incorrectly spelled words represented by the handwritten strokes, and suggested correct words that users may choose to replace the misspelled words.
  • FIG. 5 is an example schematic diagram of one or more artificial intelligence models that may be used for assessment and correction of text that is handwritten into a computer device, in accordance with one embodiment.
  • FIG. 6 is an example system for an enhanced assistant for assessment and correction and/or auto-completion that is handwritten using a device, in accordance with one embodiment.
  • FIG. 7 is a diagram illustrating an example of a computing system that may be used in implementing embodiments of the present disclosure.
  • Aspects of the present disclosure involve systems, methods, and the like, for spell checking and automatic completion of text that is handwritten using a device.
  • Devices may allow users to input characters in a variety of ways, such as with keystrokes and with stylus strokes.
  • When a user enters a keystroke (e.g., using a keyboard), the keystroke is converted to a corresponding character, such as a letter, number, symbol, or punctuation mark.
  • When a key is pressed on a keyboard, it is converted into a binary number that represents a character, so there is no ambiguity in determining which character a user typed with a keystroke.
  • When a user handwrites text into a computer device with an electronic device, such as a stylus, or with a finger, there are many variations in the handwriting that introduce ambiguity when determining which characters the handwriting represents.
  • Handwriting can be in different fonts, so a cursive letter may look different than its block letter counterpart. Even two characters written in the same font by two different people may look different. Analyzing characters handwritten into a device therefore depends on the ability of the computer device to correctly identify the characters represented by the handwriting.
  • An electronic device encompasses a broad array of electronic gadgets, including tools such as a digital stylus or any comparable apparatus, which permit the user to sketch characters on a computer interface as a form of hand-drawn or handwritten input. Beyond the use of an electronic device for inputting strokes onto the computer device, users can also engage the intuitiveness of their own fingers as a dynamic and natural means to accomplish the same task, thus providing a more direct and tactile interaction with the digital interface.
  • Although electronic devices are primarily illustrated as examples, it should be understood that the scope of interaction is not limited to these alone.
  • A user’s finger also serves as a viable tool for interacting with computer devices.
  • The exemplification of an electronic device should not be misconstrued as a limitation; rather, it serves as one among many possible methods for interaction in the broader digital landscape.
  • A computer device, such as a laptop, tablet, or smartphone, can be described as a sophisticated system equipped with an interactive interface designed to accept and interpret strokes from an electronic device, recording these inputs as lines, characters, shapes, and more. This interaction transforms abstract human action into digitized elements.
  • Correctly identifying the handwritten text is important to a computer device’s ability to assess the words represented by the handwritten text. If the computer device improperly identifies handwritten words, then it may not correctly assess whether the spelling is correct and may not be able to provide suggestions for automatically completing the spelling of a word or sentence.
  • A computer device-based analysis of handwritten characters must also be able to process the characters identified from the handwritten inputs, recognize that they represent words and sentences, determine whether the words are spelled correctly, and anticipate subsequent words that may be recommended to automatically complete a sentence without the user having to handwrite all of the characters in the sentence.
  • The list of supported languages for auto-completion includes, but is not limited to: English, German, French, Spanish, Portuguese, Italian, Dutch, Chinese, Japanese, Korean, Thai, Russian, and Vietnamese.
  • The list of supported languages for spell checking includes, but is not limited to: English, German, French, Spanish, Portuguese, Italian, Dutch, Thai, Russian, and Turkish.
  • Misspellings can impact the readability and overall quality of notes, discouraging users from rereading them or sharing them with others. Misspellings can also reduce the accuracy of handwriting recognition and other artificial intelligence (AI) features that rely on the content’s accuracy. Recalling the right spelling for every word can slow down one’s transcription. Sometimes a user may handwrite a word with a slight mistake and continue writing despite the mistake. However, it can be time-consuming to go back, spot the errors, and correct them manually. Automated solutions to detect handwritten characters, spot the errors, and offer suggested corrections depend on a computer device’s ability to accurately identify handwritten characters.
  • Correction and auto-completion of handwritten text using pre-configured characters may result in inconsistent handwriting that does not look like the user’s actual handwriting.
  • A computer device may receive handwritten strokes on a screen or touchpad, such as with an electronic device (e.g., a stylus) or a user’s finger, representing handwritten characters.
  • The computer device may analyze the handwritten strokes to identify the characters represented by the handwritten strokes based on the X and Y coordinates of the strokes on the computer device (e.g., compared to the coordinates of known characters, whether the same characters written by the same user or otherwise).
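The coordinate-based comparison against known characters can be sketched as a simple template match. The `templates` dictionary, the point format, and the mean-distance measure below are illustrative assumptions, not the recognizer the disclosure actually uses:

```python
import math

def stroke_distance(stroke_a, stroke_b):
    """Mean Euclidean distance between corresponding (x, y) points of
    two strokes; a hypothetical stand-in for a learned similarity."""
    n = min(len(stroke_a), len(stroke_b))
    return sum(math.dist(stroke_a[i], stroke_b[i]) for i in range(n)) / n

def recognize_character(stroke, templates):
    """Return the character whose stored template stroke is closest
    to the input stroke's coordinates."""
    return min(templates, key=lambda ch: stroke_distance(stroke, templates[ch]))

# Templates keyed by character; each value is a list of (x, y) points.
templates = {
    "l": [(0, 0), (0, 5), (0, 10)],   # vertical line
    "-": [(0, 5), (5, 5), (10, 5)],   # horizontal line
}
print(recognize_character([(1, 0), (1, 5), (1, 10)], templates))  # "l"
```

A production recognizer would normalize, resample, and classify strokes with a trained model; the nearest-template lookup here only illustrates the coordinate comparison.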
  • As handwritten strokes are received, the computer device may analyze them in real-time to identify spelling errors before the user completes their handwriting or requests the performance of a spell check.
  • When a misspelling is detected, the computer device may present in real-time an indication of the misspelling, such as with an underline, highlight, or another annotation. Recognized handwriting may be input to a language model for analysis and generation of suggested spellings/words for correction and auto-completion.
  • The computer device may synthesize the suggested characters to include handwriting features reflecting the user’s handwriting style and the features of the electronic device (e.g., a stylus pen tool or otherwise) that the user has chosen for the particular handwriting.
  • The thickness, texture, color, etc. of the strokes may be considered as features used to synthesize the handwriting of characters presented when selected for auto-completion.
  • The computer device may determine a confidence level in the recognized handwritten characters and in a suggested word represented by at least some of the recognized handwritten characters. If the confidence score of the recognized text exceeds a confidence threshold for representing certain characters, that may indicate that the recognized text is likely to represent a particular identified set of characters. If the confidence score of the recognized text exceeds another confidence threshold for representing a particular word, that may indicate that the recognized text is likely to represent the word but is misspelled. When both confidence scores exceed their thresholds, the computer device may trigger a spellcheck. A word may be recognized even when not all characters of the word are recognizable (e.g., not all characters have a confidence level exceeding a threshold indicative of whether the identified character is likely to be that character).
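The dual-threshold trigger described above can be sketched as a simple predicate; the threshold values are illustrative assumptions:

```python
def should_trigger_spellcheck(char_confidence, word_confidence,
                              char_threshold=0.8, word_threshold=0.6):
    """Trigger a spellcheck only when both confidence scores exceed
    their thresholds: the strokes likely represent specific characters,
    and those characters likely represent a word. Threshold values
    here are illustrative, not taken from the disclosure."""
    return char_confidence > char_threshold and word_confidence > word_threshold

print(should_trigger_spellcheck(0.95, 0.70))  # True: both thresholds exceeded
print(should_trigger_spellcheck(0.95, 0.40))  # False: word confidence too low
```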
  • The computer device may recognize a subset of the characters in a word and still be able to generate and present suggested words, either to correct misspellings or for auto-completion (e.g., of remaining characters in a handwritten word that have not yet been handwritten into the computer device). For example, when a first letter is handwritten into the computer device, the computer device may not be able to determine with sufficient confidence what the intended word is and whether it is spelled correctly. The confidence levels may increase with the real-time writing of subsequent characters until the device can determine with sufficient confidence that the word is properly identified and/or spelled correctly. In addition, suggested characters/words may change as a user handwrites subsequent characters.
  • For example, a suggested word may begin with “par,” but when the user’s next handwritten character changes the prefix to “pat,” the suggested word may update (e.g., to a word beginning with “pat,” such as “patent”).
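The prefix-driven updating of suggestions can be sketched as a vocabulary filter that is re-run as each new character is recognized; the vocabulary and the first-n ranking are illustrative assumptions:

```python
def suggest(prefix, vocabulary, limit=3):
    """Return up to `limit` vocabulary words starting with the
    recognized prefix; re-invoked as each new character arrives."""
    return [w for w in vocabulary if w.startswith(prefix)][:limit]

vocab = ["parent", "park", "patent", "path", "pattern"]
print(suggest("par", vocab))  # ['parent', 'park']
print(suggest("pat", vocab))  # ['patent', 'path', 'pattern']
```

In practice a language model would rank candidates by likelihood in context rather than by vocabulary order.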
  • A suggested word may be presented for replacement (e.g., to correct a misspelling) or as subsequent characters (e.g., auto-completion) in a manner that represents a person’s handwriting.
  • The computer device may analyze features of the handwritten strokes representing characters and may customize the presentation of the handwritten letters used in a correction or auto-completion of the characters so that the characters are presented with similar handwriting features to the rest of the user’s handwritten characters (e.g., without having to modify the handwritten characters that are not being added).
  • Suggestions and hints regarding possible words, spellings, etc. may be presented via the computer device in real-time while the user is entering handwritten strokes, or the user may deactivate real-time detection and suggestions until they are ready to request editing and suggestions. Users may select (e.g., using an electronic device or a finger) which words to skip or edit in the analysis and may add any words to a list of words considered to be spelled correctly.
  • The computer device may use machine learning (ML) for one or multiple aspects of the handwritten stroke analysis and correction.
  • A machine learning model may be used to assess the handwritten strokes as inputs and identify the characters represented by the strokes based on features of the strokes, such as the X and Y coordinates of the strokes on the computer device.
  • Another machine learning model may use the recognized characters of handwritten strokes to predict likely words used to correct a misspelling represented by the characters, and/or to predict likely letters/words for auto-completion (e.g., to fill in remaining characters of a word for which the user has not yet completed all handwritten strokes and/or to automatically present subsequent words likely to follow the user’s current handwritten strokes).
  • The text identification of handwritten characters may use few-shot learning, one-shot learning, or zero-shot learning.
  • With few-shot learning, computer vision and/or natural language processing may be used to recognize, parse, and classify handwritten characters.
  • Example images of handwritten text may be used to identify similarities between the example images and the handwritten text inputs.
  • With zero-shot learning, a machine learning model may not need to be trained on task-specific examples, but instead learns the ability to predict handwritten characters.
  • The handwriting synthesis may use AI/ML, such as deep learning, with a large dataset to train one or more models to output characters based on similarities and differences between features of handwritten characters.
  • The training data may include many versions of characters handwritten individually and in combination with other letters.
  • One or more AI/ML models may be trained to identify the similarities and differences between like characters and combinations of characters so that when the user’s actual handwritten strokes are input to the one or more models, the one or more models may recognize the features of the handwritten strokes and mimic those features when generating the selected characters for replacement/auto-completion.
  • The above descriptions are for the purpose of illustration and are not meant to be limiting. Numerous other examples, configurations, processes, etc., may exist, some of which are described in greater detail below. Example embodiments will now be described with reference to the accompanying figures.
  • FIG. 1 illustrates an example user interface for real-time assessment and correction of text that is handwritten using a device, in accordance with one embodiment.
  • A computer device 102 may present a user interface 120.
  • The computer device 102 may be a laptop, tablet, smartphone, touchscreen, television, smart home assistant, VR/AR device, or the like, capable of presenting and receiving handwritten strokes.
  • The handwritten strokes entered on the computer device 102 may represent the characters “the imun,” which may be analyzed to detect incorrect spelling and to suggest words for correction and/or auto-completion.
  • The computer device 102 may receive handwritten strokes on a screen or touchpad (e.g., corresponding to the user interface 120), such as with a stylus 122 (e.g., an electronic device) or a user’s finger, representing handwritten characters.
  • The computer device 102 and/or another remote device may analyze the handwritten strokes to identify the characters represented by the handwritten strokes based on the X and Y coordinates of the strokes on the computer device 102.
  • As handwritten strokes are received, the computer device 102 may analyze them in real-time to identify spelling errors before the user completes their handwriting or requests the performance of a spell check.
  • When a misspelling is detected, the computer device 102 may present in real-time an indication 130 of the misspelling, such as with an underline, highlight, or another annotation. Recognized handwriting may be input to a language model (see FIGs. 2 and 3) for analysis and generation of suggested spellings/words for correction and auto-completion.
  • The computer device 102 may synthesize the suggested characters to include handwriting features of other handwritten strokes of the user (e.g., for handwriting synthesis).
  • The computer device 102 may determine a confidence level in the recognized handwritten characters and in a suggested word. If the confidence score of the recognized text exceeds a confidence threshold for representing certain characters, that may indicate that the recognized text is likely to represent a particular identified set of characters. If the confidence score of the recognized text exceeds another confidence threshold for representing a particular word, that may indicate that the recognized text is likely to represent the word, but is misspelled. When both confidence scores exceed their thresholds (e.g., indicating that the handwriting represents certain characters, and that the characters represent a word), the device may trigger a spellcheck. A word may be recognized even when not all characters of the word are recognizable (e.g., not all characters have a confidence level exceeding a threshold indicative of whether the identified character is likely to be that character).
  • The computer device 102 may recognize a subset of the characters in a word and still be able to generate and present suggested words, either to correct misspellings or for auto-completion (e.g., of remaining characters in a handwritten word that have not yet been handwritten into the computer device 102). For example, when a first letter is handwritten into the computer device 102, the computer device 102 may not be able to determine with sufficient confidence what the intended word is and whether it is spelled correctly. The confidence levels may increase with the real-time writing of subsequent characters until the device can determine with sufficient confidence that the word is properly identified and/or spelled correctly. In addition, suggested characters/words may change as a user handwrites subsequent characters.
  • For example, a suggested word may begin with “par,” but when the user’s next handwritten character changes the prefix to “pat,” the suggested word may update (e.g., to a word beginning with “pat,” such as “patent”).
  • A suggested word may be presented for replacement (e.g., to correct a misspelling) or as subsequent characters (e.g., auto-completion) in a manner that represents a person’s handwriting.
  • The computer device 102 may analyze features of the handwritten strokes representing characters, and may customize the presentation of the handwritten letters used in a correction or auto-completion of the characters so that the characters are presented with similar handwriting features to the rest of the user’s handwritten characters (e.g., without having to modify the handwritten characters that are not being added).
  • Handwritten features may be represented by vector embeddings, for example, which may be generated by a language model or another type of AI/ML model trained to evaluate features of handwriting and quantify the features such that any entry in a vector embedding quantifies a respective handwritten feature.
  • Vector embeddings may quantify the height, thickness, width, curvature, etc. of various characters.
  • When a character is selected for auto-completion, such as the character “a,” the character may be generated based on handwriting synthesis that uses the handwriting features so that the style of the “a” is presented similarly to other characters that the user has handwritten (e.g., another “a” or other characters based on font, height, width, thickness, etc.).
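The style matching described above can be sketched by comparing feature embeddings. The four-entry embedding (height, thickness, width, curvature) and the cosine-similarity glyph selection below are illustrative assumptions, not the trained synthesis model itself:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def pick_glyph(user_style, candidates):
    """Choose the stored rendering of a character whose embedding
    (here: height, thickness, width, curvature) is most similar to
    the user's observed handwriting style."""
    return max(candidates, key=lambda g: cosine(user_style, candidates[g]))

user_style = [0.9, 0.3, 0.5, 0.7]          # hypothetical user embedding
candidates = {
    "a_upright": [0.9, 0.3, 0.5, 0.6],
    "a_slanted": [0.2, 0.8, 0.9, 0.1],
}
print(pick_glyph(user_style, candidates))  # "a_upright"
```

A full synthesis model would generate strokes directly rather than select from stored glyphs; the selection step only illustrates how quantified features can drive style consistency.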
  • Suggestions and hints regarding possible words, spellings, etc. may be presented via the computer device 102 in real-time while the user is entering handwritten strokes, or the user may deactivate real-time detection and suggestions until they are ready to request editing and suggestions. Users may choose which words to skip or edit in the analysis and may add any words to a list of words considered to be spelled correctly.
  • The computer device 102 may use machine learning for one or multiple aspects of the handwritten stroke analysis and correction.
  • A machine learning model may be used to assess the handwritten strokes as inputs and identify the characters represented by the strokes based on features of the strokes, such as the X and Y coordinates of the strokes on the computer device.
  • Another machine learning model may use the recognized characters of handwritten strokes to predict likely words used to correct a misspelling represented by the characters, and/or to predict likely letters/words for auto-completion (e.g., to fill in remaining characters of a word for which the user has not yet completed all handwritten strokes and/or to automatically present subsequent words likely to follow the user’s current handwritten strokes).
  • The text identification of handwritten characters may use few-shot learning, one-shot learning, or zero-shot learning.
  • With few-shot learning, computer vision and/or natural language processing may be used to recognize, parse, and classify handwritten characters.
  • Example images of handwritten text may be used to identify similarities between the example images and the handwritten text inputs.
  • With zero-shot learning, a machine learning model may not need to be trained on task-specific examples, but instead learns the ability to predict handwritten characters.
  • The handwriting synthesis may use AI/ML, such as deep learning, with a large dataset to train one or more models to output characters based on similarities and differences between features of handwritten characters.
  • The training data may include many versions of characters handwritten individually and in combination with other letters.
  • One or more AI/ML models may be trained to identify the similarities and differences between like characters and combinations of characters so that when the user’s actual handwritten strokes are input to the one or more models, the one or more models may recognize the features of the handwritten strokes and mimic those features when generating the selected characters for replacement/auto-completion.
  • FIG. 2 shows an example process for real-time assessment and completion of text that is handwritten into a computer device, in accordance with one embodiment.
  • Users of the computer device 102 of FIG. 1 may input (e.g., using an electronic device such as the stylus 122) handwritten strokes 202 to one or more user interfaces (e.g., the user interface 120 of FIG. 1) of an application running at least partially on the computer device 102.
  • The handwritten strokes 202 may be input with a finger, stylus, or another instrument/input device (e.g., a finger or an electronic device).
  • The handwritten strokes 202 may be input into one or more user interfaces of the application so that the application may detect them.
  • The handwritten strokes 202 may be provided to one or more handwriting recognition modules 204 for recognition and analysis.
  • The handwriting recognition modules 204 may convert the handwritten strokes 202 to characters (e.g., handwriting recognition, or HWR).
  • The handwritten strokes 202 may have pixel coordinates where the user’s finger, stylus, or other handwriting input device touched the display (e.g., of the computer device 102).
  • The pixel coordinates (e.g., the X and Y coordinates of the display) may be used as features for identifying the characters represented by the handwritten strokes 202.
  • In this manner, detecting the handwritten characters differs from mapping a keyboard input to a character.
  • The conversion of handwritten strokes 202 to characters may use machine learning, such as a model trained to detect characters based on similarities and/or differences with known handwritten characters (e.g., previously learned and/or trained with training data), including the pixel coordinates and other features such as shape, size, and the like.
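A minimal sketch of feature-based matching against known characters follows. The three features (width, height, point count) and the nearest-neighbour lookup are stand-ins for a trained model's learned features, not the disclosure's actual method:

```python
def stroke_features(points):
    """Extract simple features (width, height, point count) from a
    stroke's pixel coordinates; stand-ins for learned features."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs), max(ys) - min(ys), len(points))

def classify(points, training):
    """Nearest-neighbour lookup: compare the stroke's features against
    labelled examples and return the closest label. A trained model
    would replace this with learned weights."""
    feats = stroke_features(points)
    def dist(example):
        return sum((a - b) ** 2 for a, b in zip(feats, example[1]))
    return min(training, key=dist)[0]

# Hypothetical labelled examples: (character, (width, height, points)).
training = [
    ("1", (0, 10, 3)),    # tall, narrow
    ("-", (10, 0, 3)),    # wide, flat
    ("o", (8, 8, 12)),    # round, many points
]
stroke = [(2, 0), (2, 5), (2, 11)]
print(classify(stroke, training))  # "1"
```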
  • The characters may include numbers, letters, symbols, math constructs, functions, matrices, and the like.
  • A language model 206 may receive the identified characters as inputs for analysis. In this manner, the ability of the language model 206 to assess spelling and make recommendations for replacement and/or additional words for auto-completion may be based on the handwriting recognition modules’ 204 ability to correctly recognize the characters represented by the handwritten strokes 202.
  • As characters are recognized, the language model 206 may analyze them in real-time to identify spelling errors before the user completes their handwriting or requests performance of a spell check. Recognized handwriting may be input to the language model 206 for analysis and generation of suggested words for auto-completion.
  • The language model 206 may determine whether the handwritten strokes 202 represent handwritten characters, and may determine whether the handwritten characters represent a word.
  • The language model 206 may determine whether the handwritten characters likely represent one or more words, and whether the words are correctly spelled.
  • When a misspelling is detected, the computer device 102 may indicate 210 a spelling error (e.g., using underlining as shown in FIG. 2 or another indication described herein) and present a menu with the n most likely (and correctly spelled) words represented by the handwritten strokes (e.g., word 1, . . ., word n as shown in FIG. 2).
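One common way to rank the n most likely correctly spelled candidates is by edit distance from the recognized text, sketched below with Levenshtein distance; the dictionary and the distance-only ranking are illustrative assumptions (a language model as described above could also weigh context):

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming over two rows."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def top_suggestions(misspelled, dictionary, n=3):
    """Rank dictionary words by closeness to the recognized
    (misspelled) text and return the n best."""
    return sorted(dictionary, key=lambda w: edit_distance(misspelled, w))[:n]

dictionary = ["immune", "immunity", "imminent", "moon", "inn"]
print(top_suggestions("imun", dictionary, n=2))  # ['immune', 'inn']
```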
  • the selected suggested word 212 may be provided to handwriting synthesis modules 214 for customizing their handwritten presentation.
  • the handwriting synthesis modules 214 may synthesize the characters of the suggested characters to include handwriting features of other handwritten strokes of the user (e.g., for handwriting synthesis).
  • the language model 206 may detect in real-time that “imm” is the start of words such as “immunity,” immune,” and other words, any of which may be presented to the user as suggested words to auto-complete the characters that the user is handwriting in real-time.
  • the language model 206 may determine a confidence level in the recognized handwritten characters and in a suggested word. If the confidence score of the recognized text exceeds a confidence threshold for representing certain characters, such may indicate that the recognized text is likely to represent a particular identified set of characters. If the confidence score of the recognized text exceeds another confidence threshold for representing a particular word, such may indicate that the recognized text is likely to represent a suggested word (or at least a portion of the suggested word). A suggested word may be recognized even when not all characters of the word as entered are recognizable (e.g., not all characters have a confidence level exceeding a threshold indicative of whether the identified character is likely to be that character).
  • the language model 206 may recognize a subset of the characters in a word and still be able to generate and present suggested words for auto-completion/spelling correction (e.g., remaining characters in a handwritten word that have not yet been handwritten into the device). For example, when a first letter is handwritten with the device, the language model 206 may or may not be able to determine with sufficient confidence what the intended word is. The confidence levels may increase with the real-time writing of subsequent characters until the computer device can determine with sufficient confidence that the word is properly identified. In addition, suggested characters/words may change as a user handwrites subsequent characters.
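The confidence-driven flow described above, in which suggestions are re-ranked as each character arrives and a word is committed only once its confidence exceeds a threshold, may be pictured with the following illustrative sketch; the lexicon, scoring rule, and threshold values are invented for the example and are not part of the specification:

```python
# Sketch: confidence for a suggested word grows as more characters are
# handwritten; suggestions are re-ranked after every stroke. The lexicon,
# scoring rule, and threshold below are illustrative assumptions.

LEXICON = ["immune", "immunity", "immigrant", "imminent"]
CONFIDENCE_THRESHOLD = 0.75

def rank_suggestions(prefix, lexicon=LEXICON):
    """Return (word, confidence) pairs for words matching the prefix so far.

    Confidence here is a toy ratio of matched length to word length; a real
    language model would produce calibrated probabilities."""
    matches = [w for w in lexicon if w.startswith(prefix)]
    return sorted(
        ((w, len(prefix) / len(w)) for w in matches),
        key=lambda pair: pair[1],
        reverse=True,
    )

def confident_word(prefix):
    """Return the top suggested word once its confidence exceeds the threshold."""
    ranked = rank_suggestions(prefix)
    if ranked and ranked[0][1] >= CONFIDENCE_THRESHOLD:
        return ranked[0][0]
    return None
```

With this toy scoring, “imm” yields “immune” as the top (but not yet confident) suggestion, and by “immun” the confidence crosses the threshold, mirroring the “imm”/“immune” example above.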
  • a suggested word may be presented for replacement (e.g., to correct a misspelling) or as subsequent characters (e.g., for auto-completion) in a manner that represents a person’s handwriting.
  • the replacement suggestions may include properly spelled words such as “immune,” available for selection by the user as replacements or subsequent words (e.g., to complete a sentence).
  • the selected word may be presented in the user interface in a handwritten style that is similar to the user’s handwritten strokes so as to appear consistent (e.g., as if the user handwrote the selected word).
  • the handwriting synthesis modules 214 may analyze features of the handwritten strokes representing characters, and may customize the presentation of the handwritten letters used in a correction or auto-completion of the characters so that the characters are presented with similar handwriting features to the rest of the user’s handwritten characters (e.g., without having to modify the handwritten characters that are not being added).
  • suggestions and hints regarding possible words, spellings, etc. may be presented via the computer device 102 in real-time while the user is entering handwritten strokes, or the user may deactivate real-time detection and suggestions until they are ready to request editing and suggestions. Users may choose which words to skip or edit in the analysis and may add any words to a list of words considered to be spelled correctly.
  • the computer device 102 may use machine learning for one or multiple aspects of the analysis and suggestions (e.g., corrected spelling and/or subsequent characters for auto-completion).
  • a machine learning model may be used by the handwriting recognition modules 204 to assess the handwritten strokes as inputs and identify the characters represented by the strokes based on features of the strokes, such as the X and Y coordinates of the strokes on the computer device.
  • Another machine learning model may use the recognized characters of handwritten strokes to predict likely words used to correct a misspelling represented by the characters, and/or to predict likely letters/words for auto-completion (e.g., to fill in remaining characters of a word for which the user has not yet completed all handwritten strokes and/or to automatically present subsequent words likely to follow the user’s current handwritten strokes).
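One hedged way to picture the prediction of subsequent words mentioned above is a simple bigram counter, sketched below; a deployed system would use a trained language model, and the class name and training sentences here are invented for illustration:

```python
# Sketch: a toy next-word predictor for auto-completing subsequent words.
# The bigram counts are built from invented example sentences, not from
# any training data described in the specification.

from collections import Counter, defaultdict

class NextWordModel:
    def __init__(self):
        self.bigrams = defaultdict(Counter)

    def train(self, sentences):
        for sentence in sentences:
            words = sentence.lower().split()
            for prev, nxt in zip(words, words[1:]):
                self.bigrams[prev][nxt] += 1

    def predict(self, prev_word, n=3):
        """Return up to n words most likely to follow prev_word."""
        counts = self.bigrams[prev_word.lower()]
        return [w for w, _ in counts.most_common(n)]

model = NextWordModel()
model.train([
    "the immune system protects the body",
    "the immune response is rapid",
])
```

Here `model.predict("the")` ranks “immune” first because it follows “the” most often in the toy training sentences.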
  • the handwriting synthesis modules 214 may use AI/ML, such as deep learning, with a large dataset to train one or more models to output characters based on similarities and differences between features of handwritten characters.
  • the training data may include many versions of characters handwritten individually and in combination with other letters.
  • One or more AI/ML models may be trained to identify the similarities and differences between like characters and combinations of characters so that when the user’s actual handwritten strokes are input to the one or more models, the one or more models may recognize the features of the handwritten strokes and mimic the features when generating the selected characters for replacement/auto-completion.
  • FIG. 3 shows an example process for real-time assessment and correction of text that is handwritten into a computer device, in accordance with one embodiment.
  • users of the computer device 102 of FIG. 1 may input (e.g., using an electronic device such as the stylus 122) handwritten strokes 302 to one or more user interfaces (e.g., the user interface 120 of FIG. 1) of an application running at least partially on the computer device 102.
  • the handwritten strokes 302 may be input with a finger, stylus, or another instrument/input device (e.g., an electronic device).
  • the handwritten strokes 302 may be input into one or more user interfaces of the application so that the application may detect them.
  • the handwritten strokes 302 may be provided to the one or more handwriting recognition modules 204 of FIG. 2 for recognition and analysis.
  • the one or more handwriting recognition modules 204 may convert the handwritten strokes 302 to characters (e.g., handwriting recognition - HWR).
  • the handwritten strokes 302 may have pixel coordinates where the user’s finger, stylus, or other handwriting input device (e.g., electronic device) touched the display.
  • the pixel coordinates (e.g., X and Y coordinates of the display) may be used to identify the characters represented by the handwritten strokes.
  • detecting the handwritten characters differs from mapping a keyboard input to a character.
  • the conversion of handwritten strokes 302 to characters may use machine learning, such as a model trained to detect characters based on similarities and/or differences with known handwritten characters (e.g., previously learned and/or trained with training data), including the pixel coordinates, and other features such as shape, size, and the like.
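A minimal sketch of the stroke features mentioned above (pixel coordinates, shape, size) might look like the following; the exact feature set of a trained recognizer is not specified, so the feature names here are assumptions:

```python
# Sketch: extracting simple features from a handwritten stroke (a sequence
# of X/Y pixel coordinates) before character classification. A trained
# model would consume many more features than these.

def stroke_features(points):
    """points: list of (x, y) pixel coordinates sampled along one stroke."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    # Normalize each point into the unit box so character shape is
    # comparable regardless of where on the display it was written.
    norm = [
        ((x - min(xs)) / (width or 1), (y - min(ys)) / (height or 1))
        for x, y in points
    ]
    return {"width": width, "height": height,
            "aspect": width / (height or 1), "normalized": norm}
```

The normalized points capture shape independently of position, while width, height, and aspect ratio capture size, in line with the feature types listed above.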
  • the characters may include numbers, letters, symbols, math constructs, functions, matrices, and the like.
  • the language model 206 of FIG. 2 may receive the identified characters as inputs for analysis. In this manner, the ability of the language model 206 to assess spelling and make recommendations for replacement may be based on the handwriting recognition modules’ 204 ability to correctly recognize the characters represented by the handwritten strokes 302.
  • the computer device may analyze them in real-time to identify spelling errors before the user completes their handwriting or requests the performance of a spell check.
  • the device may present in real-time an indication of the misspelling, such as with an underline, highlight, or another annotation. Recognized handwriting may be input to a language model for analysis and generation of suggested spellings/words for correction.
  • the computer device may synthesize the characters of the suggested word to include handwriting features of other handwritten strokes of the user (e.g., for handwriting synthesis).
  • the user handwrites the word “imun” instead of the correctly spelled word “immune.”
  • the handwriting recognition modules 204 may detect in real-time, or subsequently based on a user request for spell checking, that “imun” is not a proper spelling.
  • the computer device 102 may present an indication 304 (e.g., underline or otherwise) of the misspelled word.
  • the language model 206 may determine a number of words that may be intended by the misspelling, such as “immune,” “immunity,” etc., based on the characters that have been entered by the user.
  • the handwriting recognition modules 204 may determine a confidence level in the recognized handwritten characters and in a suggested word. If the confidence score of the recognized text exceeds a confidence threshold for representing certain characters, such may indicate that the recognized text is likely to represent a particular identified set of characters. If the confidence score of the recognized text exceeds another confidence threshold for representing a particular word, such may indicate that the recognized text is likely to represent the word, but is misspelled. When both confidence scores exceed their thresholds, the computer device 102 may trigger a spellcheck. A word may be recognized even when not all characters of the word are recognizable (e.g., not all characters have a confidence level exceeding a threshold indicative of whether the identified character is likely to be that character).
  • the handwriting recognition modules 204 may recognize a subset of the characters in a word and still be able to generate and present suggested words to correct misspellings. For example, when a first letter is handwritten with the device, the handwriting recognition modules 204 or the language model 206 may not be able to determine with sufficient confidence what the intended word is and whether it is spelled correctly. The confidence levels may increase with the real-time writing of subsequent characters until the handwriting recognition modules 204 or the language model 206 can determine with sufficient confidence that the word is properly identified and/or spelled correctly. In addition, suggested characters/words may change as a user handwrites subsequent characters.
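The two-threshold trigger described above (character-level confidence plus word-level confidence, firing a spellcheck only for words not already spelled correctly) can be sketched as follows; the threshold values and function shape are illustrative assumptions:

```python
# Sketch: a spellcheck fires only when (a) the per-character recognition
# confidence is high enough, (b) the word-level confidence is high enough,
# and (c) the recognized spelling is not already a known word. The
# thresholds are invented for illustration.

CHAR_THRESHOLD = 0.8
WORD_THRESHOLD = 0.6

def should_trigger_spellcheck(char_confidences, word_confidence,
                              word_in_lexicon):
    """char_confidences: per-character recognition scores for the strokes.
    word_confidence: score that the characters represent a known word.
    word_in_lexicon: whether the recognized spelling is already correct."""
    chars_recognized = (
        sum(char_confidences) / len(char_confidences) >= CHAR_THRESHOLD
    )
    word_recognized = word_confidence >= WORD_THRESHOLD
    return chars_recognized and word_recognized and not word_in_lexicon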
  • one or more suggested words 308 may be presented for replacement (e.g., to correct a misspelling) or as subsequent characters (e.g., for auto-completion) in a manner that represents a person’s handwriting.
  • the replacement suggestions may include properly spelled words such as “immune,” available for selection by the user as replacements or subsequent words (e.g., to complete a sentence).
  • the selected word 310 may be provided to the handwriting synthesis modules 214 to synthesize 312 the selected word 310 for presentation in the user interface in a handwritten style that is similar to the user’s handwritten strokes so as to appear consistent (e.g., as if the user handwrote the selected word).
  • the handwriting synthesis modules 214 may analyze features of the handwritten strokes 302 representing characters and may customize the presentation of the handwritten letters used in a correction or auto-completion of the characters so that the characters are presented with similar handwriting features to the rest of the user’s handwritten characters (e.g., without having to modify the handwritten characters that are not being added).
  • suggestions and hints regarding possible words, spellings, etc. may be presented via the computer device 102 in real-time while the user is entering handwritten strokes, or the user may deactivate real-time detection and suggestions until they are ready to request editing and suggestions. Users may choose which words to skip or edit in the analysis and may add any words to a list of words considered to be spelled correctly.
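The user-controlled skip/add behavior described above can be sketched with a small per-user dictionary; the class and method names are hypothetical and not taken from the specification:

```python
# Sketch: a per-user dictionary of words to treat as correctly spelled,
# plus a skip list for words the user chose not to analyze. Names are
# illustrative assumptions.

class UserDictionary:
    def __init__(self, base_lexicon):
        self.base = set(base_lexicon)   # assumed lowercase lexicon
        self.added = set()              # words the user marked as correct
        self.skipped = set()            # words the user chose to skip

    def add_word(self, word):
        self.added.add(word.lower())

    def skip_word(self, word):
        self.skipped.add(word.lower())

    def needs_flag(self, word):
        """True if the word should be marked as a possible misspelling."""
        w = word.lower()
        return (w not in self.base and w not in self.added
                and w not in self.skipped)
```

Once a user adds “imun” to the dictionary, it is no longer flagged on subsequent analysis passes.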
  • the computer device 102 may use machine learning for one or multiple aspects of the analysis and suggestions.
  • a machine learning model may be used by the handwriting recognition modules 204 to assess the handwritten strokes as inputs and identify the characters represented by the strokes based on features of the strokes, such as the X and Y coordinates of the strokes on the computer device 102.
  • Another machine learning model may use the recognized characters of handwritten strokes to predict likely words used to correct a misspelling represented by the characters, and/or to predict likely letters/words for auto-completion (e.g., to fill in remaining characters of a word for which the user has not yet completed all handwritten strokes and/or to automatically present subsequent words likely to follow the user’s current handwritten strokes).
  • the handwriting synthesis modules 214 may use AI/ML, such as deep learning, with a large dataset to train one or more models to output characters based on similarities and differences between features of handwritten characters.
  • the training data may include many versions of characters handwritten individually and in combination with other letters.
  • One or more AI/ML models may be trained to identify the similarities and differences between like characters and combinations of characters so that when the user’s actual handwritten strokes are input to the one or more models, the one or more models may recognize the features of the handwritten strokes and mimic the features when generating the selected characters for replacement/auto-completion.
  • FIG. 4A shows an example user interface 402 with handwritten strokes and indications of incorrectly spelled words represented by the handwritten strokes, in accordance with one embodiment.
  • the computer device 102 of FIG. 1 may allow a user to enter handwritten strokes via the user interface 402.
  • handwritten strokes are converted to characters analyzed by a language model (e.g., the language model 206 of FIG. 2), and when the characters represent misspelled words, an indicator such as an underline may be presented via the user interface to identify possibly misspelled words.
  • FIG. 4B shows an example user interface 420 with handwritten strokes and the suggestion of auto-completion for users to choose to complete the word.
  • the computer device 102 may present the user interface 420 into which a user may enter handwritten strokes.
  • handwritten strokes are converted to characters analyzed by a language model (e.g., the language model 206 of FIG. 2), and when the characters represent misspelled words, an indicator such as an underline may be presented via the user interface to identify possibly misspelled words.
  • the handwritten strokes may represent partial 422 words, such as “stan” as shown in FIG. 4B.
  • the language model 206 may identify possible words to auto-complete the remainder of the word beginning with “stan,” such as “standardization,” “Stanford,” and “standards,” which may be presented via the user interface 420 for user selection for auto-completion.
  • FIG. 4C shows an example user interface 440 with handwritten strokes, indications of incorrectly spelled words represented by the handwritten strokes and the suggested correct word for users to choose and replace the misspelled word.
  • the computer device 102 may present the user interface 440 into which a user may enter handwritten strokes.
  • the handwritten strokes are converted to characters analyzed by a language model (e.g., the language model 206 of FIG. 2), and when the characters represent misspelled words, an indicator such as an underline may be presented via the user interface to identify possibly misspelled words and to allow a user to see suggested replacement words 442 that may replace the misspelled words when selected.
  • the characters “discoureies” are handwritten, identified and indicated as misspelled, and the replacement words 442 may include “discourse,” “discourses,” and “discourse’s,” and options to ignore the identified misspelling or to provide other recommended words may be presented as shown.
  • “discoureies” may refer to “discoveries,” so selecting “more” may be needed to show additional suggested words until that word is presented as an option.
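One plausible mechanism for surfacing “discoveries” among the replacement suggestions for “discoureies” is edit-distance ranking, sketched below; the specification does not name an algorithm, so the use of Levenshtein distance and the candidate list are assumptions:

```python
# Sketch: ranking replacement candidates for a misspelling by Levenshtein
# edit distance. The candidate lexicon is an illustrative assumption.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def rank_replacements(misspelled, lexicon, n=3):
    """Return the n lexicon words closest to the misspelled input."""
    return sorted(lexicon, key=lambda w: edit_distance(misspelled, w))[:n]
```

A “more” control could then page through candidates beyond the first n, which is how a farther-away word such as “discoveries” eventually becomes selectable.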
  • FIG. 5 is an example schematic diagram of one or more artificial intelligence models that may be used for assessment and correction of text that is handwritten into a computer device, in accordance with one embodiment.
  • one or more artificial intelligence (AI) models 502 may be used for any of detecting the handwritten strokes, determining that the handwritten strokes represent characters, whether the characters represent a word (e.g., correctly or incorrectly spelled), and/or whether subsequent characters are likely to be entered by the user after the analyzed characters already input by the user.
  • the one or more AI models 502 may receive inputs, optionally may receive data 504 (e.g., training data, one- or few-shot examples, user feedback, etc.), and may generate outputs 508.
  • feedback 510 from the outputs 508 may be input into the one or more AI models 502, such as human-in-the-loop feedback, user feedback, comparisons of the outputs 508 to known outputs and their differences (e.g., used to adjust the one or more AI models 502, such as by adjusting weights for identifying characters, steps/lines, errors, etc.).
  • the text identification of handwritten characters may use few-shot learning, one-shot learning, or zero-shot learning.
  • with few-shot learning, computer vision and/or natural language processing techniques may be used to recognize, parse, and classify handwritten characters.
  • example images of handwritten text may be used to identify similarities between the example images and the handwritten text inputs.
  • with zero-shot learning, a machine learning model may not need to be trained on task-specific examples, but instead may predict handwritten characters without such training.
  • the inputs 506 may be the handwritten strokes and/or characteristics of the handwritten strokes, such as their pixel coordinates on the display with which they were input.
  • the data 504 may include features of characters, such as their coordinates, shapes, sizes, and the like, accounting for different fonts, such as cursive, block letters, etc.
  • the outputs 508 may include the characters identified from the handwritten strokes. The outputs 508 may be re-input to the one or more AI models 502 until the one or more AI models 502 determine that the confidence score assigned to the identified characters exceeds a threshold confidence. The closer the similarities between the inputs 506 and the known characters, for example, the higher the confidence score for identifying the characters.
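The loop described above, re-inputting outputs until the confidence score exceeds a threshold, can be sketched as follows; the toy model standing in for the AI models 502 is invented for the example:

```python
# Sketch: recognition outputs are fed back into the model until the
# confidence score for the identified characters exceeds a threshold.
# The refine step and scores are stand-ins for a real model.

def recognize_with_refinement(strokes, model, threshold=0.9, max_rounds=5):
    """model(strokes, prior) -> (characters, confidence); prior may be None."""
    characters, confidence = model(strokes, None)
    rounds = 1
    while confidence < threshold and rounds < max_rounds:
        # Re-input the previous output so the model can refine its guess.
        characters, confidence = model(strokes, characters)
        rounds += 1
    return characters, confidence

# Toy model whose confidence improves once it sees its own prior output.
def toy_model(strokes, prior):
    confidence = 0.5 if prior is None else 0.95
    return "imun", confidence
```

The `max_rounds` cap is a practical guard (an assumption, not from the specification) so the loop terminates even if confidence never crosses the threshold.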
  • the inputs 506 may include handwritten strokes as they are input in real-time.
  • the data 504 may include properly spelled words and confidence scores indicating the likelihood that certain combinations of characters that may be identified by the handwritten strokes may correspond to certain words, and/or the likelihood of certain words not yet entered following words that have already been entered by the handwritten strokes.
  • the outputs 508 may include suggested words for auto-completion.
  • the feedback 510 may include indications of user selections of auto-completions, which may be used to adjust the one or more AI models 502 (e.g., the confidence scores for the likelihoods).
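A minimal sketch of adjusting likelihood scores from auto-completion feedback follows; the update rule (a simple exponential moving average) is an illustrative assumption, as the specification does not prescribe one:

```python
# Sketch: nudging word-likelihood scores toward the user's actual choice.
# The learning rate and update rule are invented for illustration.

LEARNING_RATE = 0.2

def apply_feedback(scores, shown, selected):
    """scores: dict mapping word -> likelihood in [0, 1].
    shown: words presented to the user; selected: the word they chose."""
    for word in shown:
        target = 1.0 if word == selected else 0.0
        scores[word] = ((1 - LEARNING_RATE) * scores[word]
                        + LEARNING_RATE * target)
    return scores
```

Selected words drift upward and rejected words drift downward, so future rankings reflect the user’s observed preferences.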
  • the inputs 506 may include handwritten strokes as they are input in real-time.
  • the data 504 may include properly spelled words and confidence scores indicating the likelihood that characters input via the handwritten strokes represent a properly or improperly spelled word.
  • the outputs 508 may include suggested spellings for a word likely to be misspelled based on the handwritten strokes.
  • the feedback 510 may include indications of user selections of corrected words/spellings, which may be used to adjust the one or more AI models 502 (e.g., the confidence scores for the likelihoods).
  • FIG. 6 is an example system 600 for an enhanced assistant for assessment and correction and/or auto-completion of text that is handwritten using a computer device, in accordance with one embodiment.
  • the system 600 may include one or more devices 602 (e.g., laptops, desktops, smartphones, smart home assistants, wearable devices, televisions, or the like) capable of displaying text and receiving handwritten strokes (e.g., from a stylus 604, a finger of a user 606, or another input device).
  • the system 600 may include one or more remote devices 608 (e.g., servers, cloud-based devices, etc.).
  • the one or more devices 602 and/or the one or more remote devices 608 may execute applications that receive, analyze, and correct handwritten strokes input via the one or more devices 602.
  • the one or more devices 602 may transmit indications of the handwritten strokes and/or any analysis of the handwritten strokes to the one or more remote devices 608 (e.g., a front-end/back-end integration of the application).
  • the one or more devices 602 may analyze, detect errors, and correct the handwritten text locally.
  • the one or more devices 602 and/or the one or more remote devices 608 may include handwriting modules 610 (e.g., for receiving and detecting handwritten strokes, identifying the characters of the handwritten strokes), spelling and completion modules 612 (e.g., for detecting spelling errors and/or identifying corrected words and/or subsequent words for auto-completion), one or more user interface modules 614 (e.g., for generating the presentable data of the user interfaces shown in the figures, including the handwritten strokes, error indications, and/or hints), and AI models 616 (e.g., the one or more AI models 502 of FIG. 5).
  • any of the one or more devices 602 may receive handwritten strokes, analyze the handwritten strokes, detect errors, and correct the handwritten text locally.
  • the one or more devices 602 may receive handwritten strokes on a screen or touchpad, such as with the stylus 604 or a user’s finger, representing handwritten characters.
  • the handwriting modules 610 may analyze the handwritten strokes to identify the characters represented by the handwritten strokes based on the X and Y coordinates of the strokes on the one or more devices 602.
  • the spelling and completion modules 612 may assess the identified characters for spelling and/or auto-completion.
  • the analysis and indication of a spelling error and/or suggested auto-completion may occur in real-time so that the one or more devices 602 may notify the user of errors/suggestions prior to completing handwritten strokes.
  • the enhanced techniques herein differ from the way that a human operator, such as a teacher or other human instructor, would analyze and correct handwriting.
  • the one or more devices 602 and/or the one or more remote devices 608 may use machine learning (e.g., the AI models 616) for one or multiple aspects of the spelling analysis and correction and/or the auto-completion.
  • a machine learning model may be used to assess the handwritten strokes as inputs, and identify the characters represented by the strokes based on features of the strokes, such as the X and Y coordinates of the strokes on the device.
  • a language model may be used to identify words represented by characters, and whether those words are spelled correctly and/or whether subsequent words not already entered by the user are likely to follow the words entered by the user (e.g., are words that can be suggested for auto-completion).
  • the AI models 616 also may be used for handwriting synthesis.
  • the handwriting synthesis may use AI/ML, such as deep learning, with a large dataset to train one or more models to output characters based on similarities and differences between features of handwritten characters.
  • the training data may include many versions of characters handwritten individually and in combination with other letters.
  • the AI models 616 may be trained to identify the similarities and differences between like characters and combinations of characters so that when the user’s actual handwritten strokes are input to the one or more models, the one or more models may recognize the features of the handwritten strokes and mimic the features when generating the selected characters for replacement/auto-completion.
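Two of the handwriting features that synthesis might mimic, overall size and slant, can be sketched as a simple measure-and-transform pair; a real synthesis model would learn far richer features, and the transform below is only an illustrative assumption:

```python
# Sketch: estimate two coarse style features (character height and slant)
# from the user's existing strokes, then apply them to template glyph
# points for a synthesized suggested word. Invented for illustration.

def style_of(strokes):
    """strokes: list of strokes, each a list of (x, y) points."""
    heights, slants = [], []
    for pts in strokes:
        ys = [y for _, y in pts]
        heights.append(max(ys) - min(ys))
        (x0, y0), (x1, y1) = pts[0], pts[-1]
        slants.append((x1 - x0) / ((y1 - y0) or 1))
    return {"height": sum(heights) / len(heights),
            "slant": sum(slants) / len(slants)}

def apply_style(template_pts, style, base_height=1.0):
    """Scale and shear template glyph points to match the user's style."""
    scale = style["height"] / base_height
    return [(x * scale + y * scale * style["slant"], y * scale)
            for x, y in template_pts]
```

Synthesized characters scaled and sheared this way sit visually alongside the user’s existing strokes without modifying the strokes already on the page, consistent with the behavior described above.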
  • the error indication may be presented with hints for how to correct the error. For example, when the text in error, or its annotation (e.g., underline, highlight, different text color than the characters with no errors, etc.) is selected by the user via the device, the device may present suggestions for how to correct the error.
  • FIG. 7 is a diagram illustrating an example of a computing system 700 that may be used in implementing embodiments of the present disclosure.
  • the computing system 700 of FIG. 7 may represent at least a portion of the one or more devices 602 and/or the one or more remote devices 608 of FIG. 6, as discussed above.
  • the computer system (system) includes one or more processors 702-706.
  • Processors 702-706 may include one or more internal levels of cache (not shown) and a bus controller 722 or bus interface unit to direct interaction with the processor bus 712.
  • Processor bus 712 also known as the host bus or the front side bus, may be used to couple the processors 702-706 with the system interface 724.
  • System interface 724 may be connected to the processor bus 712 to interface other components of the system 700 with the processor bus 712.
  • system interface 724 may include a memory controller 718 for interfacing a main memory 716 with the processor bus 712.
  • the main memory 716 typically includes one or more memory cards and a control circuit (not shown).
  • System interface 724 may also include an input/output (I/O) interface 720 to interface one or more I/O bridges 725 or I/O devices with the processor bus 712.
  • I/O controllers and/or I/O devices may be connected with the I/O bus 726, such as I/O controller 728 and I/O device 730, as illustrated.
  • the system 700 may include one or more handwriting devices 719 (e.g., representing at least a portion of the handwriting modules 610, the spelling and completion modules 612, the user interface modules 614, and/or the AI models 616 of FIG. 6).
  • I/O device 730 may also include an input device (not shown), such as an alphanumeric input device, including alphanumeric and other keys for communicating information and/or command selections to the processors 702-706.
  • I/O device 730 may also include a cursor control device (not shown), such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to the processors 702-706 and for controlling cursor movement on the display device.
  • System 700 may include a dynamic storage device, referred to as main memory 716, or a random access memory (RAM) or other computer-readable devices coupled to the processor bus 712 for storing information and instructions to be executed by the processors 702-706.
  • Main memory 716 also may be used for storing temporary variables or other intermediate information during execution of instructions by the processors 702-706.
  • System 700 may include a read only memory (ROM) and/or other static storage device coupled to the processor bus 712 for storing static information and instructions for the processors 702-706.
  • FIG. 7 is but one possible example of a computer system that may employ or be configured in accordance with aspects of the present disclosure.
  • the above techniques may be performed by computer system 700 in response to processor 704 executing one or more sequences of one or more instructions contained in main memory 716. These instructions may be read into main memory 716 from another machine-readable medium, such as a storage device. Execution of the sequences of instructions contained in main memory 716 may cause processors 702-706 to perform the process steps described herein. In alternative embodiments, circuitry may be used in place of or in combination with the software instructions. Thus, embodiments of the present disclosure may include both hardware and software components.
  • a machine readable medium includes any mechanism for storing or transmitting information in a form (e.g., software, processing application) readable by a machine (e.g., a computer).
  • Such media may take the form of, but is not limited to, non-volatile media and volatile media and may include removable data storage media, non-removable data storage media, and/or external storage devices made available via a wired or wireless network architecture with such computer program products, including one or more database management products, web server products, application server products, and/or other additional software components.
  • removable data storage media include Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc Read-Only Memory (DVD- ROM), magneto-optical disks, flash drives, and the like.
  • non-removable data storage media examples include internal magnetic hard disks, SSDs, and the like.
  • the one or more memory devices 706 may include volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM), etc.) and/or non-volatile memory (e.g., read-only memory (ROM), flash memory, etc.).
  • Machine-readable media may include any tangible non-transitory medium that is capable of storing or encoding instructions to perform any one or more of the operations of the present disclosure for execution by a machine or that is capable of storing or encoding data structures and/or modules utilized by or associated with such instructions.
  • Machine-readable media may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more executable instructions or data structures.
  • Example 1 may include a method for presenting suggested handwritten characters with a device based on handwritten characters entered on the device, the method comprising: receiving, by at least one processor of a device, first handwritten strokes entered on the device by a user; identifying, by the at least one processor, characters represented by the first handwritten strokes; inputting, by the at least one processor, the characters into a machine learning model configured to identify suggested words to be presented as second handwritten strokes on the device; generating, by the machine learning model, based on the characters, a suggested word to be presented as the second handwritten strokes on the device; generating, by the at least one processor, an indication of the suggested word; presenting, by the at least one processor, on the device, the indication of the suggested word; receiving, by the at least one processor, a user selection of the indication; and presenting, by the at least one processor, on the device, based on the user selection and a style of the first handwritten strokes, the second handwritten strokes and the first handwritten strokes, wherein the second handwritten strokes are presented using the style of the first handwritten strokes.
  • Example 2 may include the method of example 1 and/or any other example herein, further comprising determining that the characters represent a misspelled word, wherein the suggested word is a correctly spelled version of the misspelled word, and wherein the indication of the suggested word indicates that the characters represent the misspelled word.
  • Example 3 may include the method of example 1 and/or any other example herein, further comprising determining that the suggested word is likely to be entered by the user subsequent to the first handwritten strokes, wherein the indication of the suggested word is a suggested auto-completion of additional handwritten strokes not yet entered on the device by the user.
  • Example 4 may include the method of example 3 and/or any other example herein, further comprising generating, by the machine learning model, based on the characters, a second suggested word to be presented as the second handwritten strokes on the device, wherein presenting the second handwritten strokes comprises presenting the suggested word and the second suggested word.
  • Example 5 may include the method of example 1 and/or any other example herein, wherein generating the suggested word comprises: determining, by the machine learning model, a confidence score that the characters represent a misspelled version of the suggested word; and determining that the confidence score exceeds a confidence score level.
  • Example 6 may include the method of example 1 and/or any other example herein, further comprising: determining, by the machine learning model, a first confidence score that the characters represent a first word; determining that the first confidence score is less than a confidence score threshold; receiving third handwritten strokes entered on the device by the user after the entry of the first handwritten strokes; identifying second characters represented by the third handwritten strokes; determining, by the machine learning model, a second confidence score that the characters and the second characters represent the suggested word; and determining that the second confidence score exceeds the confidence score threshold.
  • Example 7 may include the method of example 1 and/or any other example herein, further comprising: receiving feedback for the machine learning model based on the user selection; and adjusting a confidence score, for the machine learning model, indicating a likelihood that the characters represent the suggested word.
  • Example 8 may include the method of example 1 and/or any other example herein, further comprising: generating, by a second machine learning model, the second handwritten strokes based on features of the style, wherein the second machine learning model is configured to synthesize characters of the suggested words for presentation based on handwriting features of characters.
  • Example 9 may include the method of example 1 and/or any other example herein, wherein identifying the characters comprises: determining, based on features of the first handwritten strokes, a confidence score that the first handwritten strokes represent the characters; and determining that the confidence score exceeds a confidence score threshold.
  • Example 10 may include a system for presenting suggested handwritten characters with a device based on handwritten characters entered on the device, the system comprising memory coupled to at least one processor of an edge gateway backend system, the at least one processor configured to: receive first handwritten strokes entered on the device by a user; identify characters represented by the first handwritten strokes; input the characters into a machine learning model configured to identify suggested words to be presented as second handwritten strokes on the device; generate, by the machine learning model, based on the characters, a suggested word to be presented as the second handwritten strokes on the device; generate an indication of the suggested word; present, on the device, the indication of the suggested word; receive a user selection of the indication; and present, on the device, based on the user selection and a style of the first handwritten strokes, the second handwritten strokes and the first handwritten strokes, wherein the second handwritten strokes are presented using the style.
  • Example 11 may include the system of example 10 and/or any other example herein, wherein the at least one processor is further configured to: determine that the characters represent a misspelled word, wherein the suggested word is a correctly spelled version of the misspelled word, and wherein the indication of the suggested word indicates that the characters represent the misspelled word.
  • Example 12 may include the system of example 10 and/or any other example herein, wherein the at least one processor is further configured to: determine that the suggested word is likely to be entered by the user subsequent to the first handwritten strokes, wherein the indication of the suggested word is a suggested auto-completion of additional handwritten strokes not yet entered on the device by the user.
  • Example 13 may include the system of example 12 and/or any other example herein, wherein the at least one processor is further configured to: generate, by the machine learning model, based on the characters, a second suggested word to be presented as the second handwritten strokes on the device, wherein to present the second handwritten strokes comprises presenting the suggested word and the second suggested word.
  • Example 14 may include the system of example 10 and/or any other example herein, wherein to generate the suggested word comprises: determine, by the machine learning model, a confidence score that the characters represent a misspelled version of the suggested word; and determine that the confidence score exceeds a confidence score level.
  • Example 15 may include the system of example 10 and/or any other example herein, wherein the at least one processor is further configured to: determine, by the machine learning model, a first confidence score that the characters represent a first word; determine that the first confidence score is less than a confidence score threshold; receive third handwritten strokes entered on the device by the user after the entry of the first handwritten strokes; identify second characters represented by the third handwritten strokes; determine, by the machine learning model, a second confidence score that the characters and the second characters represent the suggested word; and determine that the second confidence score exceeds the confidence score threshold.
  • Example 16 may include the system of example 10 and/or any other example herein, wherein the at least one processor is further configured to: receive feedback for the machine learning model based on the user selection; and adjust a confidence score, for the machine learning model, indicating a likelihood that the characters represent the suggested word.
  • Example 17 may include the system of example 10 and/or any other example herein, wherein the at least one processor is further configured to: generate, by a second machine learning model, the second handwritten strokes based on features of the style, wherein the second machine learning model is configured to synthesize characters of the suggested words for presentation based on handwriting features of characters.
  • Example 18 may include the system of example 10 and/or any other example herein, wherein to identify the characters comprises: determine, based on features of the first handwritten strokes, a confidence score that the first handwritten strokes represent the characters; and determine that the confidence score exceeds a confidence score threshold.
  • Example 19 may include a computer-readable storage medium comprising instructions to cause at least one processor for presenting suggested handwritten characters with a device based on handwritten characters entered on the device, upon execution of the instructions by the at least one processor, to: receive first handwritten strokes entered on the device by a user; identify characters represented by the first handwritten strokes; input the characters into a machine learning model configured to identify suggested words to be presented as second handwritten strokes on the device; generate, by the machine learning model, based on the characters, a suggested word to be presented as the second handwritten strokes on the device; generate an indication of the suggested word; present, on the device, the indication of the suggested word; receive a user selection of the indication; and present, on the device, based on the user selection and a style of the first handwritten strokes, the second handwritten strokes and the first handwritten strokes, wherein the second handwritten strokes are presented using the style.
  • Example 20 may include the computer-readable storage medium of example 19 and/or any other example herein, wherein execution of the instructions further causes the at least one processor to: determine that the characters represent a misspelled word, wherein the suggested word is a correctly spelled version of the misspelled word, and wherein the indication of the suggested word indicates that the characters represent the misspelled word.
  • Example 21 may include the computer-readable storage medium of example 19 and/or any other example herein, wherein execution of the instructions further causes the at least one processor to: determine that the suggested word is likely to be entered by the user subsequent to the first handwritten strokes, wherein the indication of the suggested word is a suggested auto-completion of additional handwritten strokes not yet entered on the device by the user.
  • Example 22 may include the computer-readable storage medium of example 21 and/or any other example herein, wherein execution of the instructions further causes the at least one processor to: generate, by the machine learning model, based on the characters, a second suggested word to be presented as the second handwritten strokes on the device, wherein to present the second handwritten strokes comprises presenting the suggested word and the second suggested word.
  • Example 23 may include the computer-readable storage medium of example 19 and/or any other example herein, wherein to generate the suggested word comprises: determine, by the machine learning model, a confidence score that the characters represent a misspelled version of the suggested word; and determine that the confidence score exceeds a confidence score level.
  • Example 24 may include the computer-readable storage medium of example 19 and/or any other example herein, wherein execution of the instructions further causes the at least one processor to: determine, by the machine learning model, a first confidence score that the characters represent a first word; determine that the first confidence score is less than a confidence score threshold; receive third handwritten strokes entered on the device by the user after the entry of the first handwritten strokes; identify second characters represented by the third handwritten strokes; determine, by the machine learning model, a second confidence score that the characters and the second characters represent the suggested word; and determine that the second confidence score exceeds the confidence score threshold.
  • Example 25 may include the computer-readable storage medium of example 19 and/or any other example herein, wherein execution of the instructions further causes the at least one processor to: receive feedback for the machine learning model based on the user selection; and adjust a confidence score, for the machine learning model, indicating a likelihood that the characters represent the suggested word.
  • Example 26 may include the computer-readable storage medium of example 19 and/or any other example herein, wherein execution of the instructions further causes the at least one processor to: generate, by a second machine learning model, the second handwritten strokes based on features of the style, wherein the second machine learning model is configured to synthesize characters of the suggested words for presentation based on handwriting features of characters.
  • Example 27 may include the computer-readable storage medium of example 19 and/or any other example herein, wherein to identify the characters comprises: determine, based on features of the first handwritten strokes, a confidence score that the first handwritten strokes represent the characters; and determine that the confidence score exceeds a confidence score threshold.
  • Example 28 may include an apparatus comprising means for: receiving first handwritten strokes entered on the device by a user; identifying characters represented by the first handwritten strokes; inputting the characters into a machine learning model configured to identify suggested words to be presented as second handwritten strokes on the device; generating, by the machine learning model, based on the characters, a suggested word to be presented as the second handwritten strokes on the device; generating an indication of the suggested word; presenting the indication of the suggested word; receiving a user selection of the indication; and presenting, based on the user selection and a style of the first handwritten strokes, the second handwritten strokes and the first handwritten strokes, wherein the second handwritten strokes are presented using the style.
  • Embodiments of the present disclosure include various steps, which are described in this specification. The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the steps. Alternatively, the steps may be performed by a combination of hardware, software and/or firmware.

Abstract

This disclosure describes systems, methods, and devices for presenting suggested handwritten characters with a device based on handwritten characters entered on the device. A method may include: receiving, by a device, first handwritten strokes entered on the device by a user; identifying characters represented by the first handwritten strokes; inputting, by the at least one processor, the characters into a machine learning model configured to identify suggested words to be presented as second handwritten strokes on the device; generating, by the machine learning model, based on the characters, a suggested word to be presented as the second handwritten strokes on the device; generating an indication of the suggested word; presenting, on the device, the indication of the suggested word; receiving a user selection of the indication; and presenting on the device, based on the user selection and a style of the first handwritten strokes, the first and second handwritten strokes.

Description

ENHANCED SPELL CHECKING AND AUTO-COMPLETION FOR TEXT THAT IS HANDWRITTEN ON A COMPUTER DEVICE
CROSS-REFERENCE TO RELATED APPLICATION
This application is related to and claims priority under 35 U.S.C. § 119(e) from U.S. Patent Application No. 63/531,380, filed August 8, 2023, titled “ENHANCED SPELL CHECKING AND AUTO-COMPLETION FOR TEXT THAT IS HANDWRITTEN ON A COMPUTER DEVICE,” the entire content of which is incorporated herein by reference for all purposes.
TECHNICAL FIELD
Embodiments of the present invention generally relate to systems and methods for spell checking and automatic completion of text that is handwritten on a computer device.
BACKGROUND
Devices may allow users to handwrite text rather than enter text using keystrokes. Entering handwritten text on a computer device using an electronic device or other input tool presents challenges in identifying the characters of the handwritten text that are not present when converting keystrokes to characters.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an example user interface for real-time assessment and correction of text that is handwritten into a computer device, in accordance with one embodiment.
FIG. 2 shows an example process for real-time assessment and completion of text that is handwritten into a computer device, in accordance with one embodiment.
FIG. 3 shows an example process for real-time assessment and correction of text that is handwritten into a computer device, in accordance with one embodiment.
FIG. 4A shows an example user interface with handwritten strokes and indications of incorrectly spelled words represented by the handwritten strokes, in accordance with one embodiment.
FIG. 4B shows an example user interface with handwritten strokes and suggested auto-completions from which users may choose to complete the word, in accordance with one embodiment.
FIG. 4C shows an example user interface with handwritten strokes, indications of incorrectly spelled words represented by the handwritten strokes, and suggested correct words that users may choose to replace the misspelled words, in accordance with one embodiment.
FIG. 5 is an example schematic diagram of one or more artificial intelligence models that may be used for assessment and correction of text that is handwritten into a computer device, in accordance with one embodiment.
FIG. 6 is an example system for an enhanced assistant for assessment and correction and/or auto-completion that is handwritten using a device, in accordance with one embodiment.
FIG. 7 is a diagram illustrating an example of a computing system that may be used in implementing embodiments of the present disclosure.
Certain implementations will now be described more fully below with reference to the accompanying drawings, in which various implementations and/or aspects are shown. However, various aspects may be implemented in many different forms and should not be construed as limited to the implementations set forth herein; rather, these implementations are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Like numbers in the figures refer to like elements throughout. Hence, if a feature is used across several drawings, the number used to identify the feature in the drawing where the feature first appeared will be used in later drawings.
DETAILED DESCRIPTION
Aspects of the present disclosure involve systems, methods, and the like, for spell checking and automatic completion for text that is handwritten using a device.
Devices may allow users to input characters in a variety of ways, such as with keystrokes and with stylus strokes. When a user enters a keystroke (e.g., using a keyboard), the keystroke is converted to a corresponding character, such as a letter, number, symbol, or punctuation mark. When a key is pressed on a keyboard, it is converted into a binary number that represents a character, so there is no ambiguity in determining which character a user typed with a keystroke. In contrast, when a user handwrites text into a computer device with an electronic device, such as a stylus, or a user’s finger, there are many variations in the handwriting that introduce ambiguity when determining what characters the handwriting represents. For example, handwriting can be in different fonts, so a cursive letter may look different than its block letter counterpart. Even two characters written using a same font by two different people may look different. Analyzing characters handwritten into a device, therefore, depends on the ability of the computer device to correctly identify the characters represented by the handwriting.
Humans may identify and categorize handwritten characters after seeing only a few examples, but a machine’s ability to identify and categorize handwritten characters may require significantly more examples to train. An electronic device encompasses a broad array of electronic gadgets, including tools such as a digital stylus or any comparable apparatus, which permit the user to sketch characters on a computer interface as a form of hand-drawn or handwritten input. Beyond the use of an electronic device for inputting strokes onto the computer device, users can also engage the intuitiveness of their own fingers as a dynamic and natural means to accomplish the same task, thus providing a more direct and tactile interaction with the digital interface. Throughout this disclosure, while electronic devices are primarily illustrated as examples, it should be understood that the scope of interaction is not limited to these alone. A user’s finger also serves as a viable tool for interacting with computer devices. Hence, the exemplification of an electronic device should not be misconstrued as a limitation, but rather, it serves as one among many possible methods for interaction in the broader digital landscape. A computer device, such as a laptop, tablet, or smartphone, can be described as a sophisticated system equipped with an interactive interface designed to accept and interpret strokes from an electronic device, recording these inputs as lines, characters, shapes, and more. This interaction transforms abstract human action into digitized elements.
To allow a computer device to analyze characters handwritten into the computer device, correctly identifying the handwritten text is important to a computer device’s ability to assess the words represented by the handwritten text. If the computer device improperly identifies handwritten words, then the computer device may not correctly assess whether the spelling is correct and may not be able to provide suggestions for automatically completing the spelling of a word or sentence.
A computer device-based analysis of handwritten characters also must be able to process the characters identified from the handwritten inputs to the computer device, recognize that they represent words and sentences, determine whether the words are spelled correctly, and anticipate subsequent words that may be recommended to automatically complete a sentence without the user having to handwrite all of the characters in the sentence. The list of supported languages for auto-completion includes, but is not limited to: English, German, French, Spanish, Portuguese, Italian, Dutch, Chinese, Japanese, Korean, Thai, Russian and Turkish. The list of supported languages for Spellcheck includes, but is not limited to: English, German, French, Spanish, Portuguese, Italian, Dutch, Thai, Russian and Turkish.
Writing can lead to spelling errors or messy handwriting, especially for non-native speakers and complex words. Misspellings can impact the readability and overall quality of notes, discouraging the users from rereading or sharing with others. Misspellings could also reduce the accuracy of handwriting recognition and other artificial intelligence (AI) features that rely on the content’s accuracy. Recalling the right spelling for every word can slow down one’s transcription. Sometimes a user may handwrite a word with a slight mistake and continue writing despite the mistake. However, it can be time-consuming to go back, spot the errors, and correct them manually. Automated solutions to detect handwritten characters, spot the errors, and offer suggested corrections depend on a computer device’s ability to accurately identify handwritten characters.
In addition, correction and auto-completion of handwritten text using pre-configured characters may result in inconsistent handwriting that does not look like the user’s actual handwriting.
There is therefore a need for enhanced computer device-based assistance for words that are handwritten onto the computer device.
In one or more embodiments, a computer device may receive handwritten strokes on a screen or touchpad, such as with an electronic device (e.g., a stylus) or a user’s finger, representing handwritten characters. The computer device may analyze the handwritten strokes to identify the characters represented by the handwritten strokes based on the X and Y coordinates of the strokes on the computer device (e.g., compared to coordinates of known characters, whether the same characters written by the same user or otherwise). When the handwritten characters have been identified, the computer device may analyze them in real-time to identify spelling errors before the user completes their handwriting or requests the performance of a spell check. When a handwritten word is misspelled on a computer device, the computer device may present in real-time an indication of the misspelling, such as with an underline, highlight, or another annotation. Recognized handwriting may be input to a language model for analysis and generation of suggested spellings/words for correction and auto-completion. When a user selects a suggested spelling/word for correction and auto-completion, the computer device may synthesize the characters of the suggestion to include handwriting features in terms of the user’s handwriting style, and the features of the electronic device (e.g., stylus pen tool or otherwise) that the user has chosen for the particular handwriting. For example, the thickness, texture, color, etc. of the strokes (e.g., for handwriting synthesis) may be considered as features used to synthesize the handwriting of characters presented when selected for auto-completion.
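As a rough illustration of the stroke data on which this analysis operates, a captured stroke can be pictured as a sampled sequence of X and Y coordinates plus the pen attributes the user selected. The `Stroke` type and `normalize` helper below are hypothetical sketches, not part of this disclosure; normalization into a unit box is one common way to make recognition invariant to where on the screen, and how large, the user wrote:

```python
from dataclasses import dataclass

@dataclass
class Stroke:
    """One continuous pen-down/pen-up movement, sampled as (x, y) points."""
    points: list          # [(x, y), ...] in screen coordinates
    width: float = 1.0    # pen thickness chosen by the user
    color: str = "black"  # pen color chosen by the user

def normalize(stroke):
    """Translate and scale a stroke's points into a unit box so that
    recognition does not depend on screen position or writing size."""
    xs = [p[0] for p in stroke.points]
    ys = [p[1] for p in stroke.points]
    x0, y0 = min(xs), min(ys)
    scale = max(max(xs) - x0, max(ys) - y0) or 1.0
    return [((x - x0) / scale, (y - y0) / scale) for x, y in stroke.points]
```

The normalized coordinate lists (together with pen features such as `width` and `color`) would then be the inputs to the recognition model described below.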
In one or more embodiments, to detect a misspelling, the computer device may determine a confidence level in the recognized handwritten characters and in a suggested word represented by at least some of the recognized handwritten characters. If the confidence score of the recognized text exceeds a confidence threshold for representing certain characters, such may indicate that the recognized text is likely to represent a particular identified set of characters. If the confidence score of the recognized text exceeds another confidence threshold for representing a particular word, such may indicate that the recognized text is likely to represent the word but is misspelled. When both confidence scores exceed their thresholds, the computer device may trigger a spellcheck. A word may be recognized even when not all characters of the word are recognizable (e.g., not all characters have a confidence level exceeding a threshold indicative of whether the identified character is likely to be that character). The computer device may recognize a subset of the characters in a word and still be able to generate and present suggested words, either to correct misspellings or for auto-completion (e.g., of remaining characters in a handwritten word that have not yet been handwritten into the computer device). For example, when a first letter is handwritten into the computer device, the computer device may not be able to determine with sufficient confidence what the intended word is and whether it is spelled correctly. The confidence levels may increase with the real-time writing of subsequent characters until the device can determine with sufficient confidence that the word is properly identified and/or spelled correctly. In addition, suggested characters/words may change as a user handwrites subsequent characters. For example, when a user has handwritten the letters “pa,” a suggested word may begin with “par,” but when the user’s next handwritten character is “pat,” the suggested word may update (e.g., to a word beginning with “pat,” such as “patent”).
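The “pa” to “pat” behavior can be pictured as a prefix filter over candidate words. The word list and `suggest` function below are illustrative stand-ins for the language model’s ranked output, not the disclosed model itself:

```python
VOCABULARY = ["par", "part", "party", "patent", "path", "pattern"]

def suggest(prefix, vocab=VOCABULARY, limit=3):
    """Return up to `limit` candidate completions for the recognized prefix.
    The candidate list naturally shifts as each newly recognized handwritten
    character lengthens the prefix."""
    return [w for w in vocab if w.startswith(prefix) and w != prefix][:limit]

after_pa = suggest("pa")    # candidates beginning with "pa"
after_pat = suggest("pat")  # narrows once the "t" is recognized
```

In practice the candidates would be ordered by the model’s confidence scores rather than by vocabulary order, and re-ranked on each newly recognized character.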
In one or more embodiments, a suggested word may be presented as a replacement (e.g., to correct a misspelling) or as subsequent characters (e.g., auto-completion) in a manner that represents a person’s handwriting. The computer device may analyze features of the handwritten strokes representing characters and may customize the presentation of the handwritten letters used in a correction or auto-completion of the characters so that the characters are presented with similar handwriting features to the rest of the user’s handwritten characters (e.g., without having to modify the handwritten characters that are not being added). In one or more embodiments, suggestions and hints regarding possible words, spellings, etc. may be presented via the computer device in real-time while the user is entering handwritten strokes, or the user may deactivate real-time detection and suggestions until they are ready to request editing and suggestions. Users may select (e.g., using an electronic device or user’s finger) which words to skip or edit in the analysis and may add any words to a list of words considered to be spelled correctly.
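One way to picture this style matching is to extract summary features from the strokes the user has already written and render the suggested characters with those same features. Both functions and the feature set below are simplified assumptions for illustration; the disclosure contemplates a trained synthesis model rather than a hand-written rule:

```python
from statistics import mean

def extract_style(strokes):
    """Summarize handwriting-style features (pen thickness, color, slant)
    from strokes the user has already entered on the device."""
    return {
        "width": mean(s["width"] for s in strokes),
        "color": strokes[-1]["color"],  # match the currently selected pen
        "slant": mean(s.get("slant", 0.0) for s in strokes),
    }

def synthesize(word, style):
    """Stand-in for a trained synthesis model: emit one styled glyph per
    character of the suggested word, carrying the extracted features."""
    return [{"char": c, **style} for c in word]
```

Because the style is sampled from the existing strokes, the inserted correction or completion blends with the surrounding handwriting without modifying the characters the user already wrote.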
In one or more embodiments, the computer device may use machine learning (ML) for one or multiple aspects of the handwritten stroke analysis and correction. For example, a machine learning model may be used to assess the handwritten strokes as inputs and identify the characters represented by the strokes based on features of the strokes, such as the X and Y coordinates of the strokes on the computer device. Another machine learning model may use the recognized characters of handwritten strokes to predict likely words used to correct a misspelling represented by the characters, and/or to predict likely letters/words for auto-completion (e.g., to fill in remaining characters of a word for which the user has not yet completed all handwritten strokes and/or to automatically present subsequent words likely to follow the user’s current handwritten strokes).
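This two-model division of labor reads as a short pipeline. In the sketch below, `recognizer` and `language_model` are hypothetical callables standing in for the trained stroke-to-character model and the word-suggestion model, and the threshold value is an assumed tuning parameter:

```python
def assist(strokes, recognizer, language_model, threshold=0.75):
    """Two-stage sketch: the recognition model maps strokes to characters
    with a confidence score; if the score clears the threshold, the language
    model proposes correction/auto-completion candidates for the characters."""
    chars, confidence = recognizer(strokes)
    if confidence < threshold:
        return chars, []  # hold suggestions until more strokes arrive
    return chars, language_model(chars)
```

The threshold check mirrors the confidence-based gating described earlier: suggestions are withheld until the device is sufficiently confident in what the strokes represent.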
In one or more embodiments, the text identification of handwritten characters may use few-shot learning, one-shot learning, or zero-shot learning. In few-shot learning, computer vision and/or natural language processing may be used to recognize, parse, and classify handwritten characters. In one-shot learning, images of handwritten text may be used to identify similarities between the example images and the handwritten text inputs. In zero-shot learning, a machine learning model may recognize handwritten characters without having been trained on labeled examples of those characters.
In one or more embodiments, the handwriting synthesis may use AI/ML, such as deep learning, with a large dataset to train one or more models to output characters based on similarities and differences between features of handwritten characters. For example, the training data may include many versions of characters handwritten individually and in combination with other letters. One or more AI/ML models may be trained to identify the similarities and differences between like characters and combinations of characters so that when the user’s actual handwritten strokes are input to the one or more models, the one or more models may recognize the features of the handwritten strokes and mimic the features when generating the selected characters for replacement/auto-completion.
The above descriptions are for the purpose of illustration and are not meant to be limiting. Numerous other examples, configurations, processes, etc., may exist, some of which are described in greater detail below. Example embodiments will now be described with reference to the accompanying figures.
FIG. 1 illustrates an example user interface for real-time assessment and correction of text that is handwritten using a device, in accordance with one embodiment.
Referring to FIG. 1, a computer device 102 may present a user interface 120. The computer device 102 may be a laptop, tablet, smartphone, touchscreen, television, smart home assistant, VR/AR device, or the like, capable of presenting and receiving handwritten strokes. As shown in FIG. 1, the handwritten strokes entered on the computer device 102 (e.g., entered via the user interface 120) may represent the characters “the imun,” which may be analyzed to detect incorrect spelling and suggested words for correction and/or auto-completion.
In one or more embodiments, the computer device 102 may receive handwritten strokes on a screen or touchpad (e.g., corresponding to the user interface 120), such as with a stylus 122 (e.g., an electronic device) or a user’s finger, representing handwritten characters. The computer device 102 and/or another remote device (see FIG. 6) may analyze the handwritten strokes to identify the characters represented by the handwritten strokes based on the X and Y coordinates of the strokes on the computer device 102. When the handwritten characters have been identified, the computer device 102 may analyze them in real-time to identify spelling errors before the user completes their handwriting or requests the performance of a spell check. When a handwritten word is misspelled on the computer device 102, the computer device 102 may present in real-time an indication 130 of the misspelling, such as with an underline, highlight, or another annotation. Recognized handwriting may be input to a language model (see FIGs. 2 and 3) for analysis and generation of suggested spellings/words for correction and auto-completion. When a user selects a suggested spelling/word for correction and auto-completion (see FIGs. 2 and 3), the computer device 102 may synthesize the characters of the suggestion to include handwriting features of other handwritten strokes of the user (e.g., for handwriting synthesis).
In one or more embodiments, to detect a misspelling, the computer device 102 may determine a confidence level in the recognized handwritten characters and in a suggested word. If the confidence score of the recognized text exceeds a confidence threshold for representing certain characters, such may indicate that the recognized text is likely to represent a particular identified set of characters. If the confidence score of the recognized text exceeds another confidence threshold for representing a particular word, such may indicate that the recognized text is likely to represent the word, but is misspelled. When both confidence scores exceed their thresholds (e.g., indicating that the handwriting represents certain characters, and that the characters represent a word), the device may trigger a spellcheck. A word may be recognized even when not all characters of the word are recognizable (e.g., not all characters have a confidence level exceeding a threshold indicative of whether the identified character is likely to be that character).
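As an illustrative, non-limiting sketch, the dual-threshold trigger described above may be expressed as follows; the threshold values, function name, and scoring inputs are assumptions for illustration only, not part of any particular embodiment:

```python
# Illustrative dual-threshold spellcheck trigger. The thresholds and
# the way confidence scores are produced are assumptions; a real
# implementation would obtain scores from a recognition model.

CHAR_CONFIDENCE_THRESHOLD = 0.80  # strokes likely form these characters
WORD_CONFIDENCE_THRESHOLD = 0.60  # characters likely form this word

def should_trigger_spellcheck(char_confidence: float,
                              word_confidence: float) -> bool:
    """Trigger a spellcheck only when both scores exceed their
    thresholds: the strokes are believed to represent characters, and
    the characters are believed to represent a (possibly misspelled)
    word."""
    return (char_confidence > CHAR_CONFIDENCE_THRESHOLD
            and word_confidence > WORD_CONFIDENCE_THRESHOLD)
```

With either score below its threshold, no spellcheck is triggered, which matches the description that both conditions must hold before the device flags a misspelling.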
In one or more embodiments, the computer device 102 may recognize a subset of the characters in a word and still be able to generate and present suggested words either to correct misspellings or for auto-completion (e.g., remaining characters in a handwritten word that have not yet been handwritten into the computer device 102). For example, when a first letter is handwritten into the computer device 102, the computer device 102 may not be able to determine with sufficient confidence what the intended word is and whether it is spelled correctly. The confidence levels may increase with the real-time writing of subsequent characters until the device can determine with sufficient confidence that the word is properly identified and/or spelled correctly. In addition, suggested characters/words may change as a user handwrites subsequent characters. For example, when a user has handwritten the letters “pa,” a suggested word may begin with “par,” but when the user’s next handwritten character is “t” (forming “pat”), the suggested word may update (e.g., to a word beginning with “pat,” such as “patent”).
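As a non-limiting illustration of suggestions changing as each new character arrives, the following sketch filters a toy vocabulary by the current prefix; the vocabulary and the ranking rule (shorter words first) are assumptions:

```python
# Toy prefix-based suggestion update. A real system would rank with a
# language model and confidence scores; the vocabulary is illustrative.

VOCABULARY = ["part", "park", "patch", "patent", "pattern"]

def suggest(prefix: str, limit: int = 3) -> list[str]:
    """Return up to `limit` vocabulary words starting with the prefix,
    re-ranked (here: shortest first) each time a character is added."""
    matches = [w for w in VOCABULARY if w.startswith(prefix)]
    return sorted(matches, key=len)[:limit]
```

Calling `suggest("pa")` and then `suggest("pat")` yields different candidate lists, mirroring how the suggested word may update from a word beginning with “par” to one beginning with “pat.”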
In one or more embodiments, a suggested word may be presented for replacement (e.g., to correct a misspelling) or subsequent characters (e.g., auto-completion) in a manner that represents a person’s handwriting. The computer device 102 may analyze features of the handwritten strokes representing characters, and may customize the presentation of the handwritten letters used in a correction or auto-completion of the characters so that the characters are presented with similar handwriting features to the rest of the user’s handwritten characters (e.g., without having to modify the handwritten characters that are not being added). Handwritten features may be represented by vector embeddings, for example, which may be generated by a language model or another type of AI/ML model trained to evaluate features of handwriting and quantify the features such that any entry in a vector embedding quantifies a respective handwritten feature. For example, vector embeddings may quantify height, thickness, width, curvature, etc. of various characters. When a character is selected for autocompletion, such as the character “a,” the character may be generated based on handwriting synthesis that uses the handwriting features so that the style of the “a” is presented similarly to other characters that the user has handwritten (e.g., another “a” or other characters based on font, height, width, thickness, etc.).
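The vector-embedding idea above (each entry quantifying one handwriting feature such as height, width, thickness, or curvature) may be sketched as follows; the feature set, class names, and distance-based matching rule are illustrative assumptions:

```python
# Toy handwriting-feature embedding. Each entry of the vector
# quantifies one feature; synthesis can then pick stored glyph styles
# closest to the user's style. Features and values are assumptions.

from dataclasses import dataclass
import math

@dataclass
class HandwritingStyle:
    height: float      # average character height, in pixels
    width: float       # average character width, in pixels
    thickness: float   # average stroke thickness, in pixels
    curvature: float   # 0 = angular strokes, 1 = very rounded strokes

    def embedding(self) -> list[float]:
        return [self.height, self.width, self.thickness, self.curvature]

def style_distance(a: HandwritingStyle, b: HandwritingStyle) -> float:
    """Euclidean distance between two style embeddings; a smaller
    distance indicates more similar handwriting styles."""
    return math.dist(a.embedding(), b.embedding())
```

For example, a synthesized “a” whose embedding is closest to the user’s measured style would be preferred, so the inserted character appears consistent with the surrounding handwriting.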
In one or more embodiments, suggestions and hints regarding possible words, spellings, etc. may be presented via the computer device 102 in real-time while the user is entering handwritten strokes, or the user may deactivate real-time detection and suggestions until they are ready to request editing and suggestions. Users may choose which words to skip or edit in the analysis and may add any words to a list of words considered to be spelled correctly.
In one or more embodiments, the computer device 102 may use machine learning for one or multiple aspects of the handwritten stroke analysis and correction. For example, a machine learning model may be used to assess the handwritten strokes as inputs, and identify the characters represented by the strokes based on features of the strokes, such as the X and Y coordinates of the strokes on the computer device. Another machine learning model may use the recognized characters of handwritten strokes to predict likely words used to correct a misspelling represented by the characters, and/or to predict likely letters/words for auto-completion (e.g., to fill in remaining characters of a word for which the user has not yet completed all handwritten strokes and/or to automatically present subsequent words likely to follow the user’s current handwritten strokes).
In one or more embodiments, the text identification of handwritten characters may use few-shot learning, one-shot learning, or zero-shot learning. In few-shot learning, computer vision and/or natural language processing may be used to recognize, parse, and classify handwritten characters. In one-shot learning, images of handwritten text may be used to identify similarities between the example images and the handwritten text inputs. In zero-shot learning, a machine learning model may not need to be trained on examples of the target characters, but instead may predict handwritten characters that it has not seen during training.
In one or more embodiments, the handwriting synthesis may use AI/ML, such as deep learning, with a large dataset to train one or more models to output characters based on similarities and differences between features of handwritten characters. For example, the training data may include many versions of characters handwritten individually and in combination with other letters. One or more AI/ML models may be trained to identify the similarities and differences between like characters and combinations of characters so that when the user’s actual handwritten strokes are input to the one or more models, the one or more models may recognize the features of the handwritten strokes and mimic the features when generating the selected characters for replacement/auto-completion.
FIG. 2 shows an example process for real-time assessment and completion of text that is handwritten into a computer device, in accordance with one embodiment.
Referring to FIG. 2, users of the computer device 102 of FIG. 1 may input (e.g., using an electronic device such as the stylus 122) handwritten strokes 202 to one or more user interfaces (e.g., the user interface 120 of FIG. 1) of an application running at least partially on the computer device 102. The handwritten strokes 202 may be input with a finger, stylus, or another instrument/input device (e.g., a finger or an electronic device). The handwritten strokes 202 may be input into one or more user interfaces of the application so that the application may detect them. The handwritten strokes 202 may be provided to one or more handwriting recognition modules 204 for recognition and analysis.
The handwriting recognition modules 204 may convert the handwritten strokes 202 to characters (e.g., handwriting recognition - HWR). The handwritten strokes 202 may have pixel coordinates where the user’s finger, stylus, or other handwriting input device touched the display (e.g., of the computer device 102). The pixel coordinates (e.g., X and Y coordinates of the display) may correspond to characters. In this manner, detecting the handwritten characters differs from mapping a keyboard input to a character. The conversion of handwritten strokes 202 to characters may use machine learning, such as a model trained to detect characters based on similarities and/or differences with known handwritten characters (e.g., previously learned and/or trained with training data), including the pixel coordinates, and other features such as shape, size, and the like. The characters may include numbers, letters, symbols, math constructs, functions, matrices, and the like.
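A minimal, non-limiting sketch of mapping stroke pixel coordinates to a character is shown below; real recognizers use trained models, whereas this toy version normalizes the stroke into a unit square and compares it against two hand-picked templates, all of which are assumptions for illustration:

```python
# Toy stroke-to-character matcher. Templates, normalization, and the
# point-wise distance metric are illustrative assumptions; a trained
# model would replace all of this in practice.

import math

# Each template is a list of normalized (x, y) points for a character.
TEMPLATES = {
    "l": [(0.5, 0.0), (0.5, 0.5), (0.5, 1.0)],  # vertical stroke
    "-": [(0.0, 0.5), (0.5, 0.5), (1.0, 0.5)],  # horizontal stroke
}

def normalize(points):
    """Scale stroke points into the unit square so position and size
    on the display do not affect matching."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w = (max(xs) - min(xs)) or 1.0
    h = (max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in points]

def recognize(points):
    """Return the template character with the smallest summed
    point-wise distance to the normalized stroke."""
    norm = normalize(points)
    def dist(tpl):
        return sum(math.dist(p, q) for p, q in zip(norm, tpl))
    return min(TEMPLATES, key=lambda c: dist(TEMPLATES[c]))
```

The normalization step reflects the passage’s point that recognition depends on the geometry of the strokes rather than on any keyboard-style mapping of input to character codes.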
A language model 206 (e.g., a large language model) may receive the identified characters as inputs for analysis. In this manner, the ability of the language model 206 to assess spelling and make recommendations for replacement and/or additional words for autocompletion may be based on the handwriting recognition modules’ 204 ability to correctly recognize the characters represented by the handwritten strokes 202. When the handwritten characters have been identified, the language model 206 may analyze them in real-time to identify spelling errors before the user completes their handwriting or requests performance of a spell check. Recognized handwriting may be input to the language model 206 for analysis and generation of suggested words for auto-completion. The language model 206 may determine whether the handwritten strokes 202 represent handwritten characters, and may determine whether the handwritten characters represent a word. Using confidence scoring 208, the language model may determine whether the handwritten characters likely represent one or more words, and whether the words are correctly spelled. When the words are not correctly spelled (e.g., the confidence score indicates that the word likely represented by the handwritten strokes 202 is not spelled correctly by the handwritten strokes 202), the computer device 102 may indicate 210 a spelling error (e.g., using underlining as shown in FIG. 2 or another indication described herein) and present a menu with the n most likely (and correctly spelled) words represented by the handwritten strokes (e.g., word 1, . . ., word n as shown in FIG. 2).
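The flow of flagging a misspelled word and presenting the n most likely correctly spelled candidates may be sketched, in a non-limiting way, with a simple similarity-based lookup; the dictionary is a toy assumption, and a language model would additionally weigh sentence context:

```python
# Illustrative misspelling detection and top-n suggestion generation.
# The dictionary and the similarity cutoff are assumptions; a language
# model would produce context-aware confidence scores instead.

from difflib import get_close_matches

DICTIONARY = ["immune", "immunity", "immunize", "the", "import"]

def top_suggestions(word: str, n: int = 3) -> list[str]:
    """Return up to n dictionary words closest to the input. An empty
    list means the word is spelled correctly (no underline needed) or
    is too dissimilar to anything known."""
    if word in DICTIONARY:
        return []  # correctly spelled: no misspelling indication
    return get_close_matches(word, DICTIONARY, n=n, cutoff=0.5)
```

For the example in FIG. 2, an input like “imun” would be flagged and a short menu of candidates such as “immune” and “immunity” could be offered for selection.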
When a user selects one of the suggested words (e.g., with a touch of the word on the display), the selected suggested word 212 may be provided to handwriting synthesis modules 214 for customizing its handwritten presentation. When a user selects a suggested word for auto-completion, the handwriting synthesis modules 214 may synthesize the suggested characters to include handwriting features of the user’s other handwritten strokes (e.g., for handwriting synthesis).
In the example shown in FIG. 2, when the user handwrites the characters “imm,” the language model 206 may detect in real-time that “imm” is the start of words such as “immunity,” “immune,” and other words, any of which may be presented to the user as suggested words to auto-complete the characters that the user is handwriting in real-time.
In one or more embodiments, to identify suggested words for auto-completion, the language model 206 may determine a confidence level in the recognized handwritten characters and in a suggested word. If the confidence score of the recognized text exceeds a confidence threshold for representing certain characters, such may indicate that the recognized text is likely to represent a particular identified set of characters. If the confidence score of the recognized text exceeds another confidence threshold for representing a particular word, such may indicate that the recognized text is likely to represent a suggested word (or at least a portion of the suggested word). A suggested word may be recognized even when not all characters of the word as entered are recognizable (e.g., not all characters have a confidence level exceeding a threshold indicative of whether the identified character is likely to be that character). The language model 206 may recognize a subset of the characters in a word and still be able to generate and present suggested words for auto-completion/spelling correction (e.g., remaining characters in a handwritten word that have not yet been handwritten into the device). For example, when a first letter is handwritten with the device, the language model 206 may or may not be able to determine with sufficient confidence what the intended word is. The confidence levels may increase with the real-time writing of subsequent characters until the computer device can determine with sufficient confidence that the word is properly identified. In addition, suggested characters/words may change as a user handwrites subsequent characters.
In one or more embodiments, a suggested word may be presented for replacement (e.g., to correct a misspelling) or subsequent characters (e.g., auto-completion) in a manner that represents a person’s handwriting. In the example shown in FIG. 2, the replacement suggestions may include properly spelled words such as “immune,” available for selection by the user as replacements or subsequent words (e.g., to complete a sentence). When a suggested word is selected by the user for replacement or addition, the selected word may be presented in the user interface in a handwritten style that is similar to the user’s handwritten strokes so as to appear consistent (e.g., as if the user handwrote the selected word).
The handwriting synthesis modules 214 may analyze features of the handwritten strokes representing characters, and may customize the presentation of the handwritten letters used in a correction or auto-completion of the characters so that the characters are presented with similar handwriting features to the rest of the user’s handwritten characters (e.g., without having to modify the handwritten characters that are not being added).
In one or more embodiments, suggestions and hints regarding possible words, spellings, etc. may be presented via the computer device 102 in real-time while the user is entering handwritten strokes, or the user may deactivate real-time detection and suggestions until they are ready to request editing and suggestions. Users may choose which words to skip or edit in the analysis and may add any words to a list of words considered to be spelled correctly.
In one or more embodiments, the computer device 102 may use machine learning for one or multiple aspects of the analysis and suggestions (e.g., corrected spelling and/or subsequent characters for auto-completion). For example, a machine learning model may be used by the handwriting recognition modules 204 to assess the handwritten strokes as inputs and identify the characters represented by the strokes based on features of the strokes, such as the X and Y coordinates of the strokes on the computer device. Another machine learning model (e.g., the language model 206) may use the recognized characters of handwritten strokes to predict likely words used to correct a misspelling represented by the characters, and/or to predict likely letters/words for auto-completion (e.g., to fill in remaining characters of a word for which the user has not yet completed all handwritten strokes and/or to automatically present subsequent words likely to follow the user’s current handwritten strokes). In one or more embodiments, the handwriting synthesis modules 214 may use AI/ML, such as deep learning, with a large dataset to train one or more models to output characters based on similarities and differences between features of handwritten characters. For example, the training data may include many versions of characters handwritten individually and in combination with other letters. One or more AI/ML models may be trained to identify the similarities and differences between like characters and combinations of characters so that when the user’s actual handwritten strokes are input to the one or more models, the one or more models may recognize the features of the handwritten strokes and mimic the features when generating the selected characters for replacement/auto-completion.
FIG. 3 shows an example process for real-time assessment and correction of text that is handwritten into a computer device, in accordance with one embodiment.
Referring to FIG. 3, users of the computer device 102 of FIG. 1 may input (e.g., using an electronic device such as the stylus 122) handwritten strokes 302 to one or more user interfaces (e.g., the user interface 120 of FIG. 1) of an application running at least partially on the computer device 102. The handwritten strokes 302 may be input with a finger, stylus, or another instrument/input device (e.g., a finger or an electronic device). The handwritten strokes 302 may be input into one or more user interfaces of the application so that the application may detect them. The handwritten strokes 302 may be provided to the one or more handwriting recognition modules 204 of FIG. 2 for recognition and analysis. The one or more handwriting recognition modules 204 may convert the handwritten strokes 302 to characters (e.g., handwriting recognition - HWR). The handwritten strokes 302 may have pixel coordinates where the user’s finger, stylus, or other handwriting input device (e.g., electronic device) touched the display. The pixel coordinates (e.g., X and Y coordinates of the display) may correspond to characters. In this manner, detecting the handwritten characters differs from mapping a keyboard input to a character. The conversion of handwritten strokes 302 to characters may use machine learning, such as a model trained to detect characters based on similarities and/or differences with known handwritten characters (e.g., previously learned and/or trained with training data), including the pixel coordinates, and other features such as shape, size, and the like. The characters may include numbers, letters, symbols, math constructs, functions, matrices, and the like.
The language model 206 of FIG. 2 (e.g., a large language model) may receive the identified characters as inputs for analysis. In this manner, the ability of the language model 206 to assess spelling and make recommendations for replacement may be based on the handwriting recognition modules’ 204 ability to correctly recognize the characters represented by the handwritten strokes 302. When the handwritten characters have been identified, the computer device may analyze them in real-time to identify spelling errors before the user completes their handwriting or requests the performance of a spell check. When a handwritten word is misspelled on a device, the device may present in real-time an indication of the misspelling, such as with an underline, highlight, or another annotation. Recognized handwriting may be input to a language model for analysis and generation of suggested spellings/words for correction. When a user selects a suggested spelling/word for correction, the computer device may synthesize the suggested characters to include handwriting features of the user’s other handwritten strokes (e.g., for handwriting synthesis).
In the example shown in FIG. 3, the user handwrites the word “imun” instead of the correctly spelled word “immune.” The handwriting recognition modules 204 may detect in real-time, or subsequently based on a user request for spell checking, that “imun” is not a proper spelling. When a misspelled word is detected, the computer device 102 may present an indication 304 (e.g., underline or otherwise) of the misspelled word. When a user taps or otherwise selects the misspelled word with the indication 304 of the misspelling, the language model 206 may determine a number of words that may be intended by the misspelling, such as “immune,” “immunity,” etc., based on the characters that have been entered by the user.
In one or more embodiments, to detect a misspelling, the handwriting recognition modules 204 may determine a confidence level in the recognized handwritten characters and in a suggested word. If the confidence score of the recognized text exceeds a confidence threshold for representing certain characters, such may indicate that the recognized text is likely to represent a particular identified set of characters. If the confidence score of the recognized text exceeds another confidence threshold for representing a particular word, such may indicate that the recognized text is likely to represent the word, but is misspelled. When both confidence scores exceed their thresholds, the computer device 102 may trigger a spellcheck. A word may be recognized even when not all characters of the word are recognizable (e.g., not all characters have a confidence level exceeding a threshold indicative of whether the identified character is likely to be that character). The handwriting recognition modules 204 may recognize a subset of the characters in a word and still be able to generate and present suggested words to correct misspellings. For example, when a first letter is handwritten with the device, the handwriting recognition modules 204 or the language model 206 may not be able to determine with sufficient confidence what the intended word is and whether it is spelled correctly. The confidence levels may increase with the real-time writing of subsequent characters until the handwriting recognition modules 204 or the language model 206 can determine with sufficient confidence that the word is properly identified and/or spelled correctly. In addition, suggested characters/words may change as a user handwrites subsequent characters.
In one or more embodiments, one or more suggested words 308 (e.g., word 1, . . ., word n) may be presented for replacement (e.g., to correct a misspelling) or subsequent characters (e.g., auto-completion) in a manner that represents a person’s handwriting. In the examples shown in FIGs. 2 and 3, the replacement suggestions may include properly spelled words such as “immune,” available for selection by the user as replacements or subsequent words (e.g., to complete a sentence). When a suggested word is selected 310 by the user for replacement or addition, the selected word 310 may be provided to the handwriting synthesis modules 214 to synthesize 312 the selected word 310 for presentation in the user interface in a handwritten style that is similar to the user’s handwritten strokes so as to appear consistent (e.g., as if the user handwrote the selected word).
The handwriting synthesis modules 214 may analyze features of the handwritten strokes 302 representing characters and may customize the presentation of the handwritten letters used in a correction or auto-completion of the characters so that the characters are presented with similar handwriting features to the rest of the user’s handwritten characters (e.g., without having to modify the handwritten characters that are not being added).
In one or more embodiments, suggestions and hints regarding possible words, spellings, etc. may be presented via the computer device 102 in real-time while the user is entering handwritten strokes, or the user may deactivate real-time detection and suggestions until they are ready to request editing and suggestions. Users may choose which words to skip or edit in the analysis and may add any words to a list of words considered to be spelled correctly.
In one or more embodiments, the computer device 102 may use machine learning for one or multiple aspects of the analysis and suggestions. For example, a machine learning model may be used by the handwriting recognition modules 204 to assess the handwritten strokes as inputs and identify the characters represented by the strokes based on features of the strokes, such as the X and Y coordinates of the strokes on the computer device 102. Another machine learning model (e.g., the language model 206) may use the recognized characters of handwritten strokes to predict likely words used to correct a misspelling represented by the characters, and/or to predict likely letters/words for auto-completion (e.g., to fill in remaining characters of a word that for which the user has not yet completed all handwritten strokes and/or to automatically present subsequent words likely to follow the user’s current handwritten strokes).
In one or more embodiments, the handwriting synthesis modules 214 may use AI/ML, such as deep learning, with a large dataset to train one or more models to output characters based on similarities and differences between features of handwritten characters. For example, the training data may include many versions of characters handwritten individually and in combination with other letters. One or more AI/ML models may be trained to identify the similarities and differences between like characters and combinations of characters so that when the user’s actual handwritten strokes are input to the one or more models, the one or more models may recognize the features of the handwritten strokes and mimic the features when generating the selected characters for replacement/auto-completion.
FIG. 4A shows an example user interface 402 with handwritten strokes and indications of incorrectly spelled words represented by the handwritten strokes, in accordance with one embodiment.
Referring to FIG. 4A, the computer device 102 of FIG. 1 may allow a user to enter handwritten strokes via the user interface 402. When the handwritten strokes are converted to characters analyzed by a language model (e.g., the language model 206 of FIG. 2), and when the characters represent misspelled words, an indicator such as an underline may be presented via the user interface to identify possibly misspelled words.
FIG. 4B shows an example user interface 420 with handwritten strokes and suggested auto-completions from which a user may choose to complete the word, in accordance with one embodiment.
Referring to FIG. 4B, the computer device 102 may present the user interface 420 into which a user may enter handwritten strokes. When the handwritten strokes are converted to characters analyzed by a language model (e.g., the language model 206 of FIG. 2), and when the characters represent misspelled words, an indicator such as an underline may be presented via the user interface to identify possibly misspelled words. The handwritten strokes may represent partial 422 words, such as “stan” as shown in FIG. 4B. The language model 206 may identify possible words to auto-complete the remainder of the word beginning with “stan,” such as “standardization,” “Stanford,” and “standards,” which may be presented via the user interface 420 for user selection for auto-completion.
FIG. 4C shows an example user interface 440 with handwritten strokes, indications of incorrectly spelled words represented by the handwritten strokes, and suggested correct words that a user may choose to replace the misspelled word, in accordance with one embodiment. Referring to FIG. 4C, the computer device 102 may present the user interface 440 into which a user may enter handwritten strokes. When the handwritten strokes are converted to characters analyzed by a language model (e.g., the language model 206 of FIG. 2), and when the characters represent misspelled words, an indicator such as an underline may be presented via the user interface to identify possibly misspelled words and to allow a user to see suggested replacement words 442 that may replace the misspelled words when selected. In the example of FIG. 4C, the characters “discoureies” are handwritten, identified and indicated as misspelled, and the replacement words 442 may include “discourse,” “discourses,” and “discourse’s,” and options to ignore the identified misspelling or to provide other recommended words may be presented as shown. For example, “discoureies” may refer to “discoveries,” so selecting “more” may be needed to show additional suggested words until that word is presented as an option.
FIG. 5 is an example schematic diagram of one or more artificial intelligence models that may be used for assessment and correction of text that is handwritten into a computer device, in accordance with one embodiment.
Referring to FIG. 5, one or more artificial intelligence (AI) models 502 (or machine learning models) may be used for any of detecting the handwritten characters, determining that the handwritten characters represent characters, whether the characters represent a word (e.g., correctly or incorrectly spelled), and/or whether subsequent characters are likely to be entered by the user after the analyzed characters already input by the user. The one or more AI models 502 may receive inputs 506, optionally may receive data 504 (e.g., training data, one- or few-shot examples, user feedback, etc.), and may generate outputs 508. Optionally, feedback 510 from the outputs 508 may be input into the one or more AI models 502, such as human-in-the-loop feedback, user feedback, comparisons of the outputs 508 to known outputs and their differences (e.g., used to adjust the one or more AI models 502, such as by adjusting weights for identifying characters, steps/lines, errors, etc.).
In one or more embodiments, the text identification of handwritten characters may use few-shot learning, one-shot learning, or zero-shot learning. In few-shot learning, computer vision and/or natural language processing may be used to recognize, parse, and classify handwritten characters. In one-shot learning, images of handwritten text may be used to identify similarities between the example images and the handwritten text inputs. In zero-shot learning, a machine learning model may not need to be trained on examples of the target characters, but instead may predict handwritten characters that it has not seen during training. In one or more embodiments, when the one or more AI models 502 are used to detect handwritten characters, the inputs 506 may be the handwritten strokes and/or characteristics of the handwritten strokes, such as their pixel coordinates on the display with which they were input. The data 504 may include features of characters, such as their coordinates, shapes, sizes, and the like, accounting for different fonts, such as cursive, block letters, etc. The outputs 508 may include the characters identified from the handwritten strokes. The outputs 508 may be re-input to the one or more AI models 502 until the one or more AI models 502 determine that the confidence score assigned to the identified characters exceeds a threshold confidence. The closer the similarities between the inputs 506 and the known characters, for example, the higher the confidence score for identifying the characters.
In one or more embodiments, when the one or more AI models 502 are used for auto-completion, the inputs 506 may include handwritten strokes as they are input in real-time. The data 504 may include properly spelled words and confidence scores indicating the likelihood that certain combinations of characters that may be identified by the handwritten strokes may correspond to certain words, and/or the likelihood of certain words not yet entered following words that have already been entered by the handwritten strokes. The outputs 508 may include suggested words for auto-completion. The feedback 510 may include indications of user selections of auto-completions, which may be used to adjust the one or more AI models 502 (e.g., the confidence scores for the likelihoods).
In one or more embodiments, when the one or more AI models 502 are used for spelling analysis, the inputs 506 may include handwritten strokes as they are input in real-time. The data 504 may include properly spelled words and confidence scores indicating the likelihood that characters input via the handwritten strokes represent a properly or improperly spelled word. The outputs 508 may include suggested spellings for a word likely to be misspelled based on the handwritten strokes. The feedback 510 may include indications of user selections of corrected words/spellings, which may be used to adjust the one or more AI models 502 (e.g., the confidence scores for the likelihoods).
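The feedback loop described above, in which user selections adjust the likelihoods used for future suggestions, may be sketched in a non-limiting way as follows; the class name and the simple count-based update rule are assumptions, standing in for the model-weight adjustments an actual embodiment would perform:

```python
# Illustrative feedback loop: each accepted suggestion nudges that
# word's likelihood upward for future rankings. A real system would
# update model confidence scores rather than raw counts.

from collections import defaultdict

class SuggestionRanker:
    """Ranks candidate corrections; records of user selections serve
    as the feedback signal that reorders future candidate lists."""

    def __init__(self):
        self.accept_counts = defaultdict(int)

    def record_selection(self, word: str) -> None:
        """Feedback step: the user accepted this suggestion."""
        self.accept_counts[word] += 1

    def rank(self, candidates: list[str]) -> list[str]:
        """More frequently accepted words sort first; ties keep their
        original order (sorted() is stable)."""
        return sorted(candidates, key=lambda w: -self.accept_counts[w])
```

After a user repeatedly picks “immunity” over “immune,” the ranker begins listing “immunity” first, mirroring how the feedback 510 may adjust the confidence scores for the likelihoods.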
FIG. 6 is an example system 600 for an enhanced assistant for assessment and correction and/or auto-completion that is handwritten using a computer device, in accordance with one embodiment.
Referring to FIG. 6, the system 600 may include one or more devices 602 (e.g., laptops, desktops, smartphones, smart home assistants, wearable devices, televisions, or the like) capable of displaying text and receiving handwritten strokes (e.g., from a stylus 604, a finger of a user 606, or another input device). The system 600 may include one or more remote devices 608 (e.g., servers, cloud-based devices, etc.). The one or more devices 602 and/or the one or more remote devices 608 may execute applications that receive, analyze, and correct handwritten strokes input via the one or more devices 602. For example, the one or more devices 602 may transmit indications of the handwritten strokes and/or any analysis of the handwritten strokes to the one or more remote devices 608 (e.g., a front-end/back-end integration of the application). Alternatively, the one or more devices 602 may analyze, detect errors, and correct the handwritten text locally.
Still referring to FIG. 6, the one or more devices 602 and/or the one or more remote devices 608 may include handwriting modules 610 (e.g., for receiving and detecting handwritten strokes, identifying the characters of the handwritten strokes), spelling and completion modules 612 (e.g., for detecting spelling errors and/or identifying corrected words and/or subsequent words for auto-completion), one or more user interface modules 614 (e.g., for generating the presentable data of the user interfaces shown in the figures, including the handwritten strokes, error indications, and/or hints), and AI models 616 (e.g., the one or more AI models 502 of FIG. 5). In one or more embodiments, any of the one or more devices 602 may receive handwritten strokes, analyze the handwritten strokes, detect errors, and correct the handwritten text locally.
In one or more embodiments, the one or more devices 602 may receive handwritten strokes on a screen or touchpad, such as with the stylus 604 or a user’s finger, representing handwritten characters. The handwriting modules 610 may analyze the handwritten strokes to identify the characters represented by the handwritten strokes based on the X and Y coordinates of the strokes on the one or more devices 602. The spelling and completion modules 612 may assess the identified characters for spelling and/or auto-completion. The analysis and indication of a spelling error and/or suggested auto-completion may occur in real-time so that the one or more devices 602 may notify the user of errors/suggestions prior to completing handwritten strokes. In this manner, the enhanced techniques herein differ from the way that a human operator, such as a teacher or other human instructor, would analyze and correct handwriting.
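By way of a non-limiting illustration, the use of the X and Y coordinates of the strokes may be sketched as a normalization step that a recognizer such as the handwriting modules 610 could apply before matching. The sample stroke and the unit-box normalization rule are illustrative only.

```python
def normalize_stroke(points):
    """Scale a stroke's (x, y) points into a unit bounding box."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w = max(xs) - min(xs) or 1.0  # guard against zero-width strokes
    h = max(ys) - min(ys) or 1.0  # guard against zero-height strokes
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in points]

# A roughly vertical pen-down stroke, as the device might report it.
stroke = [(120, 40), (121, 80), (120, 120), (119, 160)]
normalized = normalize_stroke(stroke)

# After normalization the stroke spans the unit box regardless of where on
# the screen it was drawn, enabling position-independent matching against
# character templates.
print(normalized[0], normalized[-1])  # (0.5, 0.0) (0.0, 1.0)
```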
In one or more embodiments, the one or more devices 602 and/or the one or more remote devices 608 may use machine learning (e.g., the AI models 616) for one or multiple aspects of the spelling analysis and correction and/or the auto-completion. For example, a machine learning model may be used to assess the handwritten strokes as inputs, and identify the characters represented by the strokes based on features of the strokes, such as the X and Y coordinates of the strokes on the device. A language model may be used to identify words represented by characters, and whether those words are spelled correctly and/or whether subsequent words not already entered by the user are likely to follow the words entered by the user (e.g., are words that can be suggested for auto-completion). The AI models 616 also may be used for handwriting synthesis. The handwriting synthesis may use AI/ML, such as deep learning, with a large dataset to train one or more models to output characters based on similarities and differences between features of handwritten characters. For example, the training data may include many versions of characters handwritten individually and in combination with other letters. The AI models 616 may be trained to identify the similarities and differences between like characters and combinations of characters so that when the user’s actual handwritten strokes are input to the one or more models, the one or more models may recognize the features of the handwritten strokes and mimic the features when generating the selected characters for replacement/auto-completion.
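By way of a non-limiting illustration, the two-stage arrangement described above — a stroke recognizer proposing characters with confidence scores, followed by a language model assessing the resulting word — may be sketched as follows. Both stages here are trivial, hand-coded stand-ins for the trained AI models 616.

```python
def recognize_characters(strokes):
    """Mock recognizer stage: each stroke yields a (character, confidence) pair."""
    return [(s["label"], s["confidence"]) for s in strokes]

def assess_word(chars_with_scores, vocabulary, char_threshold=0.5):
    """Keep characters above the threshold, then consult the 'language model'
    (here, a simple vocabulary lookup standing in for a trained model)."""
    word = "".join(c for c, score in chars_with_scores if score >= char_threshold)
    return {"word": word, "in_vocabulary": word in vocabulary}

# Hypothetical recognizer output for three strokes.
strokes = [
    {"label": "c", "confidence": 0.93},
    {"label": "a", "confidence": 0.88},
    {"label": "t", "confidence": 0.91},
]
result = assess_word(recognize_characters(strokes), vocabulary={"cat", "cart"})
print(result)  # {'word': 'cat', 'in_vocabulary': True}
```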
In one or more embodiments, the error indication may be presented with hints for how to correct the error. For example, when the text in error, or its annotation (e.g., underline, highlight, different text color than the characters with no errors, etc.) is selected by the user via the device, the device may present suggestions for how to correct the error.
It is understood that the above descriptions are for purposes of illustration and are not meant to be limiting.
FIG. 7 is a block diagram illustrating an example of a computing device or computer system 700 which may be used in implementing the embodiments of the components disclosed above. For example, the computing system 700 of FIG. 7 may represent at least a portion of the one or more devices 602 and/or the one or more remote devices 608 of FIG. 6, as discussed above. The computer system (system) includes one or more processors 702-706. Processors 702-706 may include one or more internal levels of cache (not shown) and a bus controller 722 or bus interface unit to direct interaction with the processor bus 712. Processor bus 712, also known as the host bus or the front side bus, may be used to couple the processors 702-706 with the system interface 724. System interface 724 may be connected to the processor bus 712 to interface other components of the system 700 with the processor bus 712. For example, system interface 724 may include a memory controller 718 for interfacing a main memory 716 with the processor bus 712. The main memory 716 typically includes one or more memory cards and a control circuit (not shown). System interface 724 may also include an input/output (I/O) interface 720 to interface one or more I/O bridges 725 or I/O devices with the processor bus 712. One or more I/O controllers and/or I/O devices may be connected with the I/O bus 726, such as I/O controller 728 and I/O device 730, as illustrated. The system 700 may include one or more handwriting devices 719 (e.g., representing at least a portion of the handwriting modules 610, the spelling and completion modules 612, the user interface modules 614, and/or the Al models 616 of FIG. 6).
I/O device 730 may also include an input device (not shown), such as an alphanumeric input device, including alphanumeric and other keys for communicating information and/or command selections to the processors 702-706. Another type of user input device includes cursor control, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to the processors 702-706 and for controlling cursor movement on the display device.
System 700 may include a dynamic storage device, referred to as main memory 716, or a random access memory (RAM) or other computer-readable devices coupled to the processor bus 712 for storing information and instructions to be executed by the processors 702-706. Main memory 716 also may be used for storing temporary variables or other intermediate information during execution of instructions by the processors 702-706. System 700 may include a read only memory (ROM) and/or other static storage device coupled to the processor bus 712 for storing static information and instructions for the processors 702-706. The system outlined in FIG. 7 is but one possible example of a computer system that may employ or be configured in accordance with aspects of the present disclosure.
According to one embodiment, the above techniques may be performed by computer system 700 in response to processor 704 executing one or more sequences of one or more instructions contained in main memory 716. These instructions may be read into main memory 716 from another machine-readable medium, such as a storage device. Execution of the sequences of instructions contained in main memory 716 may cause processors 702-706 to perform the process steps described herein. In alternative embodiments, circuitry may be used in place of or in combination with the software instructions. Thus, embodiments of the present disclosure may include both hardware and software components.
A machine-readable medium includes any mechanism for storing or transmitting information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). Such media may take the form of, but are not limited to, non-volatile media and volatile media, and may include removable data storage media, non-removable data storage media, and/or external storage devices made available via a wired or wireless network architecture with such computer program products, including one or more database management products, web server products, application server products, and/or other additional software components. Examples of removable data storage media include Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc Read-Only Memory (DVD-ROM), magneto-optical disks, flash drives, and the like. Examples of non-removable data storage media include internal magnetic hard disks, SSDs, and the like. The one or more memory devices 706 may include volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM), etc.) and/or non-volatile memory (e.g., read-only memory (ROM), flash memory, etc.).
Computer program products containing mechanisms to effectuate the systems and methods in accordance with the presently described technology may reside in main memory 716, which may be referred to as machine-readable media. It will be appreciated that machine-readable media may include any tangible non-transitory medium that is capable of storing or encoding instructions to perform any one or more of the operations of the present disclosure for execution by a machine or that is capable of storing or encoding data structures and/or modules utilized by or associated with such instructions. Machine-readable media may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more executable instructions or data structures.
The following examples are not meant to be exclusive.
Example 1 may include a method for presenting suggested handwritten characters with a device based on handwritten characters entered on the device, the method comprising: receiving, by at least one processor of a device, first handwritten strokes entered on the device by a user; identifying, by the at least one processor, characters represented by the first handwritten strokes; inputting, by the at least one processor, the characters into a machine learning model configured to identify suggested words to be presented as second handwritten strokes on the device; generating, by the machine learning model, based on the characters, a suggested word to be presented as the second handwritten strokes on the device; generating, by the at least one processor, an indication of the suggested word; presenting, by the at least one processor, on the device, the indication of the suggested word; receiving, by the at least one processor, a user selection of the indication; and presenting, by the at least one processor, on the device, based on the user selection and a style of the first handwritten strokes, the second handwritten strokes and the first handwritten strokes, wherein the second handwritten strokes are presented using the style.
Example 2 may include the method of example 1 and/or any other example herein, further comprising determining that the characters represent a misspelled word, wherein the suggested word is a correctly spelled version of the misspelled word, and wherein the indication of the suggested word indicates that the characters represent the misspelled word.
Example 3 may include the method of example 1 and/or any other example herein, further comprising determining that the suggested word is likely to be entered by the user subsequent to the first handwritten strokes, wherein the indication of the suggested word is a suggested auto-completion of additional handwritten strokes not yet entered on the device by the user.
Example 4 may include the method of example 3 and/or any other example herein, further comprising generating, by the machine learning model, based on the characters, a second suggested word to be presented as the second handwritten strokes on the device, wherein presenting the second handwritten strokes comprises presenting the suggested word and the second suggested word.
Example 5 may include the method of example 1 and/or any other example herein, wherein generating the suggested word comprises: determining, by the machine learning model, a confidence score that the characters represent a misspelled version of the suggested word; and determining that the confidence score exceeds a confidence score level.
Example 6 may include the method of example 1 and/or any other example herein, further comprising: determining, by the machine learning model, a first confidence score that the characters represent a first word; determining that the first confidence score is less than a confidence score threshold; receiving third handwritten strokes entered on the device by the user after the entry of the first handwritten strokes; identifying second characters represented by the third handwritten strokes; determining, by the machine learning model, a second confidence score that the characters and the second characters represent the suggested word; and determining that the second confidence score exceeds the confidence score threshold.
Example 7 may include the method of example 1 and/or any other example herein, further comprising: receiving feedback for the machine learning model based on the user selection; and adjusting a confidence score, for the machine learning model, indicating a likelihood that the characters represent the suggested word.

Example 8 may include the method of example 1 and/or any other example herein, further comprising: generating, by a second machine learning model, the second handwritten strokes based on features of the style, wherein the second machine learning model is configured to synthesize characters of the suggested words for presentation based on handwriting features of characters.
Example 9 may include the method of example 1 and/or any other example herein, wherein identifying the characters comprises: determining, based on features of the first handwritten strokes, a confidence score that the first handwritten strokes represent the characters; and determining that the confidence score exceeds a confidence score threshold.
Example 10 may include a system for presenting suggested handwritten characters with a device based on handwritten characters entered on the device, the system comprising memory coupled to at least one processor of an edge gateway backend system, the at least one processor configured to: receive first handwritten strokes entered on the device by a user; identify characters represented by the first handwritten strokes; input the characters into a machine learning model configured to identify suggested words to be presented as second handwritten strokes on the device; generate, by the machine learning model, based on the characters, a suggested word to be presented as the second handwritten strokes on the device; generate an indication of the suggested word; present, on the device, the indication of the suggested word; receive a user selection of the indication; and present, on the device, based on the user selection and a style of the first handwritten strokes, the second handwritten strokes and the first handwritten strokes, wherein the second handwritten strokes are presented using the style.
Example 11 may include the system of example 10 and/or any other example herein, wherein the at least one processor is further configured to: determine that the characters represent a misspelled word, wherein the suggested word is a correctly spelled version of the misspelled word, and wherein the indication of the suggested word indicates that the characters represent the misspelled word.
Example 12 may include the system of example 10 and/or any other example herein, wherein the at least one processor is further configured to: determine that the suggested word is likely to be entered by the user subsequent to the first handwritten strokes, wherein the indication of the suggested word is a suggested auto-completion of additional handwritten strokes not yet entered on the device by the user.
Example 13 may include the system of example 12 and/or any other example herein, wherein the at least one processor is further configured to: generate, by the machine learning model, based on the characters, a second suggested word to be presented as the second handwritten strokes on the device, wherein to present the second handwritten strokes comprises presenting the suggested word and the second suggested word.
Example 14 may include the system of example 10 and/or any other example herein, wherein to generate the suggested word comprises: determine, by the machine learning model, a confidence score that the characters represent a misspelled version of the suggested word; and determine that the confidence score exceeds a confidence score level.
Example 15 may include the system of example 10 and/or any other example herein, wherein the at least one processor is further configured to: determine, by the machine learning model, a first confidence score that the characters represent a first word; determine that the first confidence score is less than a confidence score threshold; receive third handwritten strokes entered on the device by the user after the entry of the first handwritten strokes; identify second characters represented by the third handwritten strokes; determine, by the machine learning model, a second confidence score that the characters and the second characters represent the suggested word; and determine that the second confidence score exceeds the confidence score threshold.
Example 16 may include the system of example 10 and/or any other example herein, wherein the at least one processor is further configured to: receive feedback for the machine learning model based on the user selection; and adjust a confidence score, for the machine learning model, indicating a likelihood that the characters represent the suggested word.
Example 17 may include the system of example 10 and/or any other example herein, wherein the at least one processor is further configured to: generate, by a second machine learning model, the second handwritten strokes based on features of the style, wherein the second machine learning model is configured to synthesize characters of the suggested words for presentation based on handwriting features of characters.
Example 18 may include the system of example 10 and/or any other example herein, wherein to identify the characters comprises: determine, based on features of the first handwritten strokes, a confidence score that the first handwritten strokes represent the characters; and determine that the confidence score exceeds a confidence score threshold.
Example 19 may include a computer-readable storage medium comprising instructions to cause at least one processor for presenting suggested handwritten characters with a device based on handwritten characters entered on the device, upon execution of the instructions by the at least one processor, to: receive first handwritten strokes entered on the device by a user; identify characters represented by the first handwritten strokes; input the characters into a machine learning model configured to identify suggested words to be presented as second handwritten strokes on the device; generate, by the machine learning model, based on the characters, a suggested word to be presented as the second handwritten strokes on the device; generate an indication of the suggested word; present, on the device, the indication of the suggested word; receive a user selection of the indication; and present, on the device, based on the user selection and a style of the first handwritten strokes, the second handwritten strokes and the first handwritten strokes, wherein the second handwritten strokes are presented using the style.
Example 20 may include the computer-readable storage medium of example 19 and/or any other example herein, wherein execution of the instructions further causes the at least one processor to: determine that the characters represent a misspelled word, wherein the suggested word is a correctly spelled version of the misspelled word, and wherein the indication of the suggested word indicates that the characters represent the misspelled word.
Example 21 may include the computer-readable storage medium of example 19 and/or any other example herein, wherein execution of the instructions further causes the at least one processor to: determine that the suggested word is likely to be entered by the user subsequent to the first handwritten strokes, wherein the indication of the suggested word is a suggested auto-completion of additional handwritten strokes not yet entered on the device by the user.
Example 22 may include the computer-readable storage medium of example 21 and/or any other example herein, wherein execution of the instructions further causes the at least one processor to: generate, by the machine learning model, based on the characters, a second suggested word to be presented as the second handwritten strokes on the device, wherein to present the second handwritten strokes comprises presenting the suggested word and the second suggested word.
Example 23 may include the computer-readable storage medium of example 19 and/or any other example herein, wherein to generate the suggested word comprises: determine, by the machine learning model, a confidence score that the characters represent a misspelled version of the suggested word; and determine that the confidence score exceeds a confidence score level.
Example 24 may include the computer-readable storage medium of example 19 and/or any other example herein, wherein execution of the instructions further causes the at least one processor to: determine, by the machine learning model, a first confidence score that the characters represent a first word; determine that the first confidence score is less than a confidence score threshold; receive third handwritten strokes entered on the device by the user after the entry of the first handwritten strokes; identify second characters represented by the third handwritten strokes; determine, by the machine learning model, a second confidence score that the characters and the second characters represent the suggested word; and determine that the second confidence score exceeds the confidence score threshold.
Example 25 may include the computer-readable storage medium of example 19 and/or any other example herein, wherein execution of the instructions further causes the at least one processor to: receive feedback for the machine learning model based on the user selection; and adjust a confidence score, for the machine learning model, indicating a likelihood that the characters represent the suggested word.
Example 26 may include the computer-readable storage medium of example 19 and/or any other example herein, wherein execution of the instructions further causes the at least one processor to: generate, by a second machine learning model, the second handwritten strokes based on features of the style, wherein the second machine learning model is configured to synthesize characters of the suggested words for presentation based on handwriting features of characters.
Example 27 may include the computer-readable storage medium of example 19 and/or any other example herein, wherein to identify the characters comprises: determine, based on features of the first handwritten strokes, a confidence score that the first handwritten strokes represent the characters; and determine that the confidence score exceeds a confidence score threshold.
Example 28 may include an apparatus comprising means for: receiving first handwritten strokes entered on the device by a user; identifying characters represented by the first handwritten strokes; inputting the characters into a machine learning model configured to identify suggested words to be presented as second handwritten strokes on the device; generating, by the machine learning model, based on the characters, a suggested word to be presented as the second handwritten strokes on the device; generating an indication of the suggested word; presenting the indication of the suggested word; receiving a user selection of the indication; and presenting, based on the user selection and a style of the first handwritten strokes, the second handwritten strokes and the first handwritten strokes, wherein the second handwritten strokes are presented using the style.

Embodiments of the present disclosure include various steps, which are described in this specification. The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the steps. Alternatively, the steps may be performed by a combination of hardware, software, and/or firmware.
Various modifications and additions can be made to the exemplary embodiments discussed without departing from the scope of the present invention. For example, while the embodiments described above refer to particular features, the scope of this invention also includes embodiments having different combinations of features and embodiments that do not include all of the described features. Accordingly, the scope of the present invention is intended to embrace all such alternatives, modifications, and variations together with all equivalents thereof.

Claims

WHAT IS CLAIMED:
1. A method for presenting suggested handwritten characters with a device based on handwritten characters entered on the device, the method comprising: receiving, by at least one processor of a device, first handwritten strokes entered on the device by a user; identifying, by the at least one processor, characters represented by the first handwritten strokes; inputting, by the at least one processor, the characters into a machine learning model configured to identify suggested words to be presented as second handwritten strokes on the device; generating, by the machine learning model, based on the characters, a suggested word to be presented as the second handwritten strokes on the device; generating, by the at least one processor, an indication of the suggested word; presenting, by the at least one processor, on the device, the indication of the suggested word; receiving, by the at least one processor, a user selection of the indication; and presenting, by the at least one processor, on the device, based on the user selection and a style of the first handwritten strokes, the second handwritten strokes and the first handwritten strokes, wherein the second handwritten strokes are presented using the style.
2. The method of claim 1, further comprising: determining that the characters represent a misspelled word, wherein the suggested word is a correctly spelled version of the misspelled word, and wherein the indication of the suggested word indicates that the characters represent the misspelled word.
3. The method of claim 1, further comprising: determining that the suggested word is likely to be entered by the user subsequent to the first handwritten strokes, wherein the indication of the suggested word is a suggested auto-completion of additional handwritten strokes not yet entered on the device by the user.
4. The method of claim 3, further comprising: generating, by the machine learning model, based on the characters, a second suggested word to be presented as the second handwritten strokes on the device, wherein presenting the second handwritten strokes comprises presenting the suggested word and the second suggested word.
5. The method of claim 1, wherein generating the suggested word comprises: determining, by the machine learning model, a confidence score that the characters represent a misspelled version of the suggested word; and determining that the confidence score exceeds a confidence score level.
6. The method of claim 1, further comprising: determining, by the machine learning model, a first confidence score that the characters represent a first word; determining that the first confidence score is less than a confidence score threshold; receiving third handwritten strokes entered on the device by the user after the entry of the first handwritten strokes; identifying second characters represented by the third handwritten strokes; determining, by the machine learning model, a second confidence score that the characters and the second characters represent the suggested word; and determining that the second confidence score exceeds the confidence score threshold.
7. The method of claim 1, further comprising: receiving feedback for the machine learning model based on the user selection; and adjusting a confidence score, for the machine learning model, indicating a likelihood that the characters represent the suggested word.
8. The method of claim 1, further comprising: generating, by a second machine learning model, the second handwritten strokes based on features of the style, wherein the second machine learning model is configured to synthesize characters of the suggested words for presentation based on handwriting features of characters.
9. The method of claim 1, wherein identifying the characters comprises: determining, based on features of the first handwritten strokes, a confidence score that the first handwritten strokes represent the characters; and determining that the confidence score exceeds a confidence score threshold.
10. A system for presenting suggested handwritten characters with a device based on handwritten characters entered on the device, the system comprising memory coupled to at least one processor of an edge gateway backend system, the at least one processor configured to: receive first handwritten strokes entered on the device by a user; identify characters represented by the first handwritten strokes; input the characters into a machine learning model configured to identify suggested words to be presented as second handwritten strokes on the device; generate, by the machine learning model, based on the characters, a suggested word to be presented as the second handwritten strokes on the device; generate an indication of the suggested word; present, on the device, the indication of the suggested word; receive a user selection of the indication; and present, on the device, based on the user selection and a style of the first handwritten strokes, the second handwritten strokes and the first handwritten strokes, wherein the second handwritten strokes are presented using the style.
11. The system of claim 10, wherein the at least one processor is further configured to: determine that the characters represent a misspelled word, wherein the suggested word is a correctly spelled version of the misspelled word, and wherein the indication of the suggested word indicates that the characters represent the misspelled word.
12. The system of claim 10, wherein the at least one processor is further configured to: determine that the suggested word is likely to be entered by the user subsequent to the first handwritten strokes, wherein the indication of the suggested word is a suggested auto-completion of additional handwritten strokes not yet entered on the device by the user.
13. The system of claim 12, wherein the at least one processor is further configured to: generate, by the machine learning model, based on the characters, a second suggested word to be presented as the second handwritten strokes on the device, wherein to present the second handwritten strokes comprises presenting the suggested word and the second suggested word.
14. The system of claim 10, wherein to generate the suggested word comprises: determine, by the machine learning model, a confidence score that the characters represent a misspelled version of the suggested word; and determine that the confidence score exceeds a confidence score level.
15. The system of claim 10, wherein the at least one processor is further configured to: determine, by the machine learning model, a first confidence score that the characters represent a first word; determine that the first confidence score is less than a confidence score threshold; receive third handwritten strokes entered on the device by the user after the entry of the first handwritten strokes; identify second characters represented by the third handwritten strokes; determine, by the machine learning model, a second confidence score that the characters and the second characters represent the suggested word; and determine that the second confidence score exceeds the confidence score threshold.
16. The system of claim 10, wherein the at least one processor is further configured to: receive feedback for the machine learning model based on the user selection; and adjust a confidence score, for the machine learning model, indicating a likelihood that the characters represent the suggested word.
17. The system of claim 10, wherein the at least one processor is further configured to: generate, by a second machine learning model, the second handwritten strokes based on features of the style, wherein the second machine learning model is configured to synthesize characters of the suggested words for presentation based on handwriting features of characters.
18. The system of claim 10, wherein to identify the characters comprises: determining, based on features of the first handwritten strokes, a confidence score that the first handwritten strokes represent the characters; and determining that the confidence score exceeds a confidence score threshold.
19. A computer-readable storage medium comprising instructions for presenting suggested handwritten characters with a device based on handwritten characters entered on the device, the instructions, upon execution by at least one processor, causing the at least one processor to: receive first handwritten strokes entered on the device by a user; identify characters represented by the first handwritten strokes; input the characters into a machine learning model configured to identify suggested words to be presented as second handwritten strokes on the device; generate, by the machine learning model, based on the characters, a suggested word to be presented as the second handwritten strokes on the device; generate an indication of the suggested word; present, on the device, the indication of the suggested word; receive a user selection of the indication; and present, on the device, based on the user selection and a style of the first handwritten strokes, the second handwritten strokes and the first handwritten strokes, wherein the second handwritten strokes are presented using the style.
20. The computer-readable storage medium of claim 19, wherein execution of the instructions further causes the at least one processor to: determine that the characters represent a misspelled word, wherein the suggested word is a correctly spelled version of the misspelled word, and wherein the indication of the suggested word indicates that the characters represent the misspelled word.
21. The computer-readable storage medium of claim 19, wherein execution of the instructions further causes the at least one processor to: determine that the suggested word is likely to be entered by the user subsequent to the first handwritten strokes, wherein the indication of the suggested word is a suggested auto-completion of additional handwritten strokes not yet entered on the device by the user.
22. The computer-readable storage medium of claim 21, wherein execution of the instructions further causes the at least one processor to: generate, by the machine learning model, based on the characters, a second suggested word to be presented as the second handwritten strokes on the device, wherein to present the second handwritten strokes comprises presenting the suggested word and the second suggested word.
23. The computer-readable storage medium of claim 19, wherein to generate the suggested word comprises: determining, by the machine learning model, a confidence score that the characters represent a misspelled version of the suggested word; and determining that the confidence score exceeds a confidence score level.
24. The computer-readable storage medium of claim 19, wherein execution of the instructions further causes the at least one processor to: determine, by the machine learning model, a first confidence score that the characters represent a first word; determine that the first confidence score is less than a confidence score threshold; receive third handwritten strokes entered on the device by the user after the entry of the first handwritten strokes; identify second characters represented by the third handwritten strokes; determine, by the machine learning model, a second confidence score that the characters and the second characters represent the suggested word; and determine that the second confidence score exceeds the confidence score threshold.
25. The computer-readable storage medium of claim 19, wherein execution of the instructions further causes the at least one processor to: receive feedback for the machine learning model based on the user selection; and adjust a confidence score, for the machine learning model, indicating a likelihood that the characters represent the suggested word.
26. The computer-readable storage medium of claim 19, wherein execution of the instructions further causes the at least one processor to: generate, by a second machine learning model, the second handwritten strokes based on features of the style, wherein the second machine learning model is configured to synthesize characters of the suggested words for presentation based on handwriting features of characters.
27. The computer-readable storage medium of claim 19, wherein to identify the characters comprises: determining, based on features of the first handwritten strokes, a confidence score that the first handwritten strokes represent the characters; and determining that the confidence score exceeds a confidence score threshold.
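The confidence-score flow recited in claims 14-16 and 23-25 — scoring a candidate correction for recognized characters, surfacing the suggestion only when the score exceeds a threshold, and adjusting the stored confidence based on the user's selection — can be sketched as follows. This is an illustrative toy model only: `SuggestionModel`, its character-overlap scoring, and the fixed feedback delta are all hypothetical stand-ins for the machine learning model described in the claims, not the claimed implementation.

```python
from dataclasses import dataclass, field


@dataclass
class SuggestionModel:
    """Toy stand-in for the claimed ML model: scores candidate
    corrections and learns from user selections (claims 14 and 16)."""
    threshold: float = 0.8
    # Per-(characters, candidate) confidence adjustments accumulated
    # from user feedback (illustrative; a real model would retrain).
    feedback: dict = field(default_factory=dict)

    def confidence(self, characters: str, candidate: str) -> float:
        # Illustrative scoring: positional character overlap ratio,
        # plus any feedback adjustment stored for this pair.
        overlap = sum(1 for a, b in zip(characters, candidate) if a == b)
        base = overlap / max(len(characters), len(candidate))
        return min(1.0, base + self.feedback.get((characters, candidate), 0.0))

    def suggest(self, characters: str, candidate: str):
        # Claim 14: emit the suggested word only if the confidence
        # score exceeds the confidence score level.
        score = self.confidence(characters, candidate)
        return candidate if score > self.threshold else None

    def record_selection(self, characters: str, candidate: str, accepted: bool):
        # Claim 16: adjust the stored confidence based on feedback
        # from the user selection of the indication.
        delta = 0.05 if accepted else -0.05
        key = (characters, candidate)
        self.feedback[key] = self.feedback.get(key, 0.0) + delta


model = SuggestionModel(threshold=0.8)
print(model.suggest("helo", "hello"))  # below threshold at first, so no suggestion
model.record_selection("helo", "hello", accepted=True)
```

Repeated acceptances raise the stored confidence for that pair until the candidate clears the threshold, mirroring the feedback loop of claims 16 and 25; claim 15's behavior of deferring a decision until further strokes arrive corresponds to re-scoring with a longer `characters` string.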
PCT/EP2023/085637 2023-08-08 2023-12-13 Enhanced spell checking and auto-completion for text that is handwritten on a computer device WO2025031608A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363531380P 2023-08-08 2023-08-08
US63/531,380 2023-08-08

Publications (1)

Publication Number Publication Date
WO2025031608A1 true WO2025031608A1 (en) 2025-02-13

Family

ID=89428694

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2023/085637 WO2025031608A1 (en) 2023-08-08 2023-12-13 Enhanced spell checking and auto-completion for text that is handwritten on a computer device

Country Status (1)

Country Link
WO (1) WO2025031608A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007034871A (en) * 2005-07-29 2007-02-08 Sanyo Electric Co Ltd Character input apparatus and character input apparatus program
US20160154580A1 (en) * 2014-03-12 2016-06-02 Kabushiki Kaisha Toshiba Electronic apparatus and method
US20200074167A1 (en) * 2018-09-04 2020-03-05 Nuance Communications, Inc. Multi-Character Text Input System With Audio Feedback and Word Completion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KUZNETSOV, KONSTANTIN ET AL: "SpellInk: Interactive Correction of Spelling Mistakes in Handwritten Text", in "HHAI2022: Augmenting Human Intellect: Proceedings of the First International Conference on Hybrid Human-Artificial Intelligence", 19 September 2022, IOS PRESS, ISBN: 978-1-64368-309-6, ISSN: 0922-6389, XP093147227, DOI: 10.3233/FAIA220216 *

Similar Documents

Publication Publication Date Title
US8879845B2 (en) Character recognition for overlapping textual user input
EP2535844A2 (en) Character recognition for overlapping textual user input
CN110555403A (en) handwritten character evaluation method and system
US9946704B2 (en) Tone mark based text suggestions for chinese or japanese characters or words
US20200026766A1 (en) Method for translating characters and apparatus therefor
JPH06139229A (en) Kana-kanji converting method using pen-type stylus and computer
JPH02289100A (en) Kanji coding and decoding equipment
JP2019113803A (en) Chinese character learning device
US20060126946A1 (en) Systems and methods for automatic graphical sequence completion
TWI464678B (en) Handwritten input for asian languages
Yang et al. Spell Checking for Chinese.
JP2012234512A (en) Method for text segmentation, computer program product and system
JP7095450B2 (en) Information processing device, character recognition method, and character recognition program
WO2025031608A1 (en) Enhanced spell checking and auto-completion for text that is handwritten on a computer device
JP7690465B2 (en) Ink data correction method, information processing device, and program
CN116246278A (en) Character recognition method and device, storage medium and electronic equipment
JP7285018B2 (en) Program, erroneous character detection device, and erroneous character detection method
WO2025031609A1 (en) Enhanced assistant for math that is handwritten on a computer device
US12307189B2 (en) Completing typeset characters using handwritten strokes
JP2984170B2 (en) Online handwritten character recognition device
JPH0724054B2 (en) Data processing device
JP2023112400A (en) Input device, input device control method, and information processing device
JP2989387B2 (en) Term recognition device and term recognition method in input character processing device
JP5029301B2 (en) Questioning apparatus and computer program
JPH08235317A (en) Extended self-punctuation data input method for pen computer

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23833027

Country of ref document: EP

Kind code of ref document: A1