CN109614846A - Manage real-time handwriting recognition - Google Patents
- Publication number: CN109614846A
- Application number: CN201811217821.5A
- Authority
- CN
- China
- Prior art keywords
- strokes
- user
- character
- stroke
- input
- Prior art date
- Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/22—Character recognition characterised by the type of writing
- G06V30/226—Character recognition characterised by the type of writing of cursive writing
- G06V30/2264—Character recognition characterised by the type of writing of cursive writing using word shape
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/32—Digital ink
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/28—Character recognition specially adapted to the type of the alphabet, e.g. Latin alphabet
- G06V30/287—Character recognition specially adapted to the type of the alphabet, e.g. Latin alphabet of Kanji, Hiragana or Katakana characters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/28—Character recognition specially adapted to the type of the alphabet, e.g. Latin alphabet
- G06V30/293—Character recognition specially adapted to the type of the alphabet, e.g. Latin alphabet of characters other than Kanji, Hiragana or Katakana
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Character Discrimination (AREA)
- User Interface Of Digital Computer (AREA)
- Document Processing Apparatus (AREA)
- Character Input (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to managing real-time handwriting recognition. Disclosed are methods, systems, and computer-readable media relating to techniques for providing handwriting input functionality on a user device. A handwriting recognition module is trained on a vocabulary that spans multiple non-overlapping scripts, enabling a single handwriting recognition model to recognize tens of thousands of characters. The handwriting input module provides real-time, stroke-order-independent and stroke-direction-independent handwriting recognition for multi-script handwriting input. In particular, real-time, stroke-order-independent and stroke-direction-independent handwriting recognition is provided for multi-character or sentence-level Chinese handwriting. User interfaces for providing the handwriting input functionality are also disclosed.
Description
This application is a divisional of Chinese invention patent application No. 201480030897.0, entitled "Managing real-time handwriting recognition", which has an international filing date of May 30, 2014 and entered the Chinese national phase on November 27, 2015.
Technical field
This specification relates to providing handwriting input functionality on a computing device, and more specifically to providing real-time, multi-script, stroke-order-independent handwriting recognition and input functionality on a computing device.
Background technique
Handwriting input is an important alternative input method for computing devices equipped with a touch-sensitive surface (for example, a touch-sensitive display screen or a touchpad). Many users, particularly users in some Asian and Arab countries and regions, are accustomed to cursive writing styles and may find writing in longhand more comfortable than typing on a keyboard.
For certain logographic writing systems, such as Chinese characters or Japanese kanji (also written with Chinese characters), although alternative syllable-based input methods (such as Pinyin or kana) are available for entering the characters of the corresponding logographic writing system, such syllable-based input methods fall short when the user does not know how to spell a logographic character phonetically or spells it incorrectly. Handwriting input on a computing device therefore becomes critically important for users who cannot spell the words of the relevant logographic writing system well, or at all.
Although handwriting input functionality has become popular in certain regions of the world, improvements are still needed. In particular, individual handwriting is highly variable (for example, in stroke order, size, and writing style), and high-quality handwriting recognition software is complex and requires extensive training. Providing efficient real-time handwriting recognition on mobile devices with limited memory and computing resources has therefore become a challenge.
Moreover, in today's multicultural world, users in many countries are multilingual and may frequently need to write in more than one script (for example, a message written in Chinese that mentions an English movie title). However, manually switching the recognition system to the desired script or language while writing is cumbersome and inefficient. In addition, the practicality of conventional multi-script handwriting recognition techniques is critically constrained, because expanding a device's recognition capability to handle multiple scripts greatly increases the complexity of the recognition system and its demand on computing resources.
Furthermore, conventional handwriting techniques rely heavily on language-specific or script-specific particularities to achieve recognition accuracy. Such particularities are not easily ported to other languages or scripts. Adding handwriting input capability for a new language or script is therefore a difficult undertaking that software and device vendors are reluctant to take on. As a result, users of many languages lack an important alternative input method for their electronic devices.
Conventional user interfaces for providing handwriting input include a region for receiving handwriting input from the user and a region for displaying handwriting recognition results. On portable devices with small form factors, significant improvements to the user interface are still needed to improve overall efficiency, accuracy, and user experience.
Summary of the invention
This specification describes techniques for providing multi-script handwriting recognition using a universal recognizer. The universal recognizer is trained on a large multi-script corpus of writing samples of characters in different languages and scripts. The training of the universal recognizer is language-independent, script-independent, stroke-order-independent, and stroke-direction-independent. Thus, the same recognizer can recognize mixed-language, mixed-script handwriting input without requiring manual switching between input languages during use. In addition, the universal recognizer is lightweight enough to be deployed as a standalone module on a mobile device, making handwriting input possible in the different languages and scripts used in different regions of the world.
Furthermore, because the universal recognizer is trained on spatially derived features that are stroke-order-independent and stroke-direction-independent and that do not require temporal or sequence information at the stroke level, it offers many additional features and advantages over conventional time-based recognition methods (for example, recognition methods based on Hidden Markov Models (HMMs)). For example, the user may enter the strokes of one or more characters, phrases, or sentences in any order and still obtain the same recognition result. Out-of-order multi-character input, and out-of-order corrections to previously entered characters (for example, additions or overwrites), thus become possible.
In addition, the universal recognizer is used for real-time handwriting recognition, where temporal information for each stroke is available and is optionally used to disambiguate or segment the handwriting input before character recognition is performed by the universal recognizer. The stroke-order-independent real-time recognition described herein differs from conventional offline recognition methods (for example, optical character recognition (OCR)) and can offer better performance than such methods. Moreover, the universal recognizer described herein can handle the high variability of individual writing styles (for example, variations in speed, tempo, stroke order, stroke direction, and stroke continuity) without explicitly embedding features that distinguish the different variations into the recognition system, thereby reducing the overall complexity of the recognition system.
As described herein, in some embodiments, temporally derived stroke-distribution information is optionally reintroduced into the universal recognizer to enhance recognition accuracy and to disambiguate between similar-looking recognition outputs for the same input image. Reintroducing the temporally derived stroke-distribution information does not destroy the stroke-order and stroke-direction independence of the universal recognizer, because the temporally derived features and the spatially derived features are obtained through separate training processes and are combined in the handwriting recognition model only after the separate training is complete. In addition, the temporally derived stroke-distribution information is carefully designed so that it captures the distinguishing temporal characteristics of similar-looking characters without depending on an explicit understanding of the differences in stroke order among those similar-looking characters.
User interfaces for providing handwriting input functionality are also described herein.
In some embodiments, a method of providing multi-script handwriting recognition includes: training a multi-script handwriting recognition model on the spatially derived features of a multi-script training corpus, the corpus including respective handwriting samples corresponding to the characters of at least three non-overlapping scripts; and providing real-time handwriting recognition for a user's handwriting input using the multi-script handwriting recognition model trained on the spatially derived features of the multi-script training corpus.
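The training step above can be sketched in miniature. The following is an illustrative toy, not the model described in this specification: it rasterizes handwriting samples from three non-overlapping scripts into fixed-size bitmaps (a spatially derived feature) and fits a single nearest-centroid classifier over the pooled character set, so that one model covers all scripts. All function names and the three-character corpus are invented for illustration; a real recognizer would train a neural network on a far larger corpus.

```python
import numpy as np

def rasterize(strokes, size=8):
    """Render a list of strokes (each a list of (x, y) points in [0, 1])
    into a size x size binary bitmap. Only point positions are used, so the
    result is independent of stroke order and stroke direction."""
    img = np.zeros((size, size))
    for stroke in strokes:
        for x, y in stroke:
            col = min(int(x * size), size - 1)
            row = min(int(y * size), size - 1)
            img[row, col] = 1.0
    return img

def train_centroids(samples):
    """samples: dict label -> list of stroke lists. One centroid per class,
    pooled over all scripts, so a single model serves the whole vocabulary."""
    return {label: np.mean([rasterize(s) for s in strokes_list], axis=0)
            for label, strokes_list in samples.items()}

def classify(centroids, strokes):
    img = rasterize(strokes)
    return min(centroids, key=lambda lbl: np.linalg.norm(centroids[lbl] - img))

# Toy corpus mixing three scripts (Latin, CJK, Greek); labels only illustrative.
corpus = {
    "L": [[[(0.1, y / 10) for y in range(10)]]],             # vertical bar
    "一": [[[(x / 10, 0.5) for x in range(10)]]],            # horizontal bar
    "ο": [[[(0.5 + 0.3 * np.cos(t), 0.5 + 0.3 * np.sin(t))
            for t in np.linspace(0, 2 * np.pi, 20)]]],       # circle
}
model = train_centroids(corpus)
print(classify(model, [[(x / 10, 0.52) for x in range(10)]]))  # → 一
```

Because all scripts share one label space and one feature space, no per-language model switching is needed at recognition time.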
In some embodiments, a method of providing multi-script handwriting recognition includes: receiving a multi-script handwriting recognition model that has been trained on the spatially derived features of a multi-script training corpus, the corpus including respective handwriting samples corresponding to the characters of at least three non-overlapping scripts; receiving handwriting input from a user, the handwriting input including one or more handwritten strokes provided on a touch-sensitive surface coupled to the user device; and, in response to receiving the handwriting input, providing one or more handwriting recognition results to the user in real time based on the multi-script handwriting recognition model trained on the spatially derived features of the multi-script training corpus.
In some embodiments, a method of providing real-time handwriting recognition includes: receiving a plurality of handwritten strokes from a user, the plurality of handwritten strokes corresponding to a handwritten character; generating an input image based on the plurality of handwritten strokes; providing the input image to a handwriting recognition model to perform real-time recognition of the handwritten character, wherein the handwriting recognition model provides stroke-order-independent handwriting recognition; and, as the plurality of handwritten strokes are received, displaying the same first output character in real time regardless of the respective order in which the plurality of handwritten strokes were received from the user.
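The stroke-order independence described above falls out naturally once the strokes are reduced to an input image: only where ink landed is retained, not when or in which direction it was drawn. A minimal sketch with invented names, not the specification's implementation:

```python
def to_bitmap(strokes, size=8):
    """Rasterize strokes ((x, y) points in [0, 1]) into a set of lit cells.
    Because only ink positions are kept, the image is identical no matter the
    order or direction in which the strokes were drawn."""
    cells = set()
    for stroke in strokes:
        for x, y in stroke:
            cells.add((min(int(y * size), size - 1),
                       min(int(x * size), size - 1)))
    return frozenset(cells)

horizontal = [(x / 10, 0.3) for x in range(11)]
vertical = [(0.5, y / 10) for y in range(11)]

# "十" drawn horizontal-stroke-first vs vertical-first with reversed direction:
a = to_bitmap([horizontal, vertical])
b = to_bitmap([vertical, list(reversed(horizontal))])
print(a == b)  # → True
```

The recognizer sees the same image in both cases, so it necessarily displays the same first output character.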
In some embodiments, the method further includes: receiving a second plurality of handwritten strokes from the user, the second plurality of handwritten strokes corresponding to a second handwritten character; generating a second input image based on the second plurality of handwritten strokes; providing the second input image to the handwriting recognition model to perform real-time recognition of the second handwritten character; and, as the second plurality of handwritten strokes are received, displaying in real time a second output character corresponding to the second plurality of handwritten strokes, wherein the first output character and the second output character are displayed simultaneously in a spatial sequence regardless of the respective order in which the first plurality and the second plurality of handwriting inputs were provided by the user.
In some embodiments, the second plurality of handwritten strokes is spatially positioned after the first plurality of handwritten strokes along a default writing direction of the handwriting input interface of the user device, and the second output character follows the first output character in the spatial sequence along the default writing direction, and the method further includes: receiving from the user a third handwritten stroke to revise the handwritten character, the third handwritten stroke being received temporally after the first plurality and the second plurality of handwritten strokes; in response to receiving the third handwritten stroke, assigning the third handwritten stroke to the same recognition unit as the first plurality of handwritten strokes based on the relative proximity of the third handwritten stroke to the first plurality of handwritten strokes; generating a revised input image based on the first plurality of handwritten strokes and the third handwritten stroke; providing the revised input image to the handwriting recognition model to perform real-time recognition of the revised handwritten character; and, in response to receiving the third handwriting input, displaying a third output character corresponding to the revised input image, wherein the third output character replaces the first output character and is displayed simultaneously with the second output character in the spatial sequence along the default writing direction.
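The proximity-based assignment of a late correction stroke to an earlier recognition unit can be sketched as follows. The centroid-distance rule and all names are illustrative assumptions; the specification does not fix a particular proximity measure:

```python
def centroid(strokes):
    """Mean (x, y) position over all points of all strokes in a unit."""
    pts = [p for s in strokes for p in s]
    return (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))

def assign_stroke(units, new_stroke):
    """units: list of recognition units, each a list of strokes. A late
    (out-of-order) stroke joins the unit whose centroid is closest to its
    own; returns the index of the unit it was assigned to."""
    cx, cy = centroid([new_stroke])
    def dist(unit):
        ux, uy = centroid(unit)
        return (ux - cx) ** 2 + (uy - cy) ** 2
    target = min(units, key=dist)
    target.append(new_stroke)
    return units.index(target)

# Two recognition units written left to right, then a correction stroke
# drawn back over the first one:
first = [[(0.1, 0.5), (0.2, 0.5)]]
second = [[(0.8, 0.5), (0.9, 0.5)]]
correction = [(0.15, 0.4), (0.15, 0.6)]
print(assign_stroke([first, second], correction))  # → 0
```

The revised first unit is then re-rasterized and re-recognized, while the second unit and its output character are untouched.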
In some embodiments, the method further includes: while the third output character and the second output character are simultaneously displayed as a recognition result in the candidate display region of the handwriting input interface, receiving a deletion input from the user; and, in response to the deletion input, deleting the second output character from the recognition result while keeping the third output character in the recognition result.
In some embodiments, the first plurality of handwritten strokes, the second plurality of handwritten strokes, and the third handwritten stroke are rendered in real time in the handwriting input region of the handwriting input interface as each handwritten stroke is provided by the user; and, in response to receiving the deletion input, the respective rendering of the second plurality of handwritten strokes is deleted from the handwriting input region while the respective renderings of the first plurality of handwritten strokes and the third handwritten stroke are maintained in the handwriting input region.
In some embodiments, a method of providing real-time handwriting recognition includes: receiving handwriting input from a user, the handwriting input including one or more handwritten strokes provided in the handwriting input region of a handwriting input interface; identifying, based on a handwriting recognition model, a plurality of output characters for the handwriting input; dividing the plurality of output characters into two or more categories based on a predetermined categorization criterion; displaying, in an initial view of the candidate display region of the handwriting input interface, the respective output characters of a first category of the two or more categories, wherein the initial view of the candidate display region is provided simultaneously with an affordance for invoking an extended view of the candidate display region; receiving a user input selecting the affordance for invoking the extended view; and, in response to the user input, displaying in the extended view of the candidate display region the respective output characters of the first category and the respective output characters of at least a second category of the two or more categories that were not previously shown in the initial view of the candidate display region.
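The division of output characters into an initially displayed category and expansion-only categories can be sketched as a simple partition. The "common vocabulary" criterion used here is only one possible predetermined categorization criterion; names and data are invented:

```python
def split_candidates(candidates, common_vocab):
    """Partition recognizer output into views: 'common' characters appear in
    the initial view of the candidate bar; everything else (rare characters,
    emoji, etc.) is revealed only when the user invokes the extended view."""
    common = [c for c in candidates if c in common_vocab]
    rare = [c for c in candidates if c not in common_vocab]
    return {"initial": common, "expanded": common + rare}

# 日 and 曰 are everyday characters; the sun emoji is a look-alike candidate
# relegated to the extended view:
views = split_candidates(["日", "曰", "☀"], common_vocab={"日", "曰"})
print(views["initial"])   # → ['日', '曰']
print(views["expanded"])  # → ['日', '曰', '☀']
```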
In some embodiments, a method of providing real-time handwriting recognition includes: receiving handwriting input from a user, the handwriting input including a plurality of handwritten strokes provided in the handwriting input region of a handwriting input interface; identifying from the handwriting input, based on a handwriting recognition model, a plurality of output characters that include at least a first emoji character and at least a first character from the script of a natural human language; and displaying, in the candidate display region of the handwriting input interface, a recognition result that includes the first emoji character and the first character from the script of the natural human language.
In some embodiments, a method of providing handwriting recognition includes: receiving handwriting input from a user, the handwriting input including a plurality of handwritten strokes provided on a touch-sensitive surface coupled to a device; rendering the plurality of handwritten strokes in real time in the handwriting input region of a handwriting input interface; receiving one of a pinch gesture input and a spread gesture input over the plurality of handwritten strokes; upon receiving the pinch gesture input, generating a first recognition result based on the plurality of handwritten strokes by treating the plurality of handwritten strokes as a single recognition unit; upon receiving the spread gesture input, generating a second recognition result based on the plurality of handwritten strokes by treating the plurality of handwritten strokes as two separate recognition units pulled apart by the spread gesture input; and, upon generating the respective one of the first recognition result and the second recognition result, displaying the generated recognition result in the candidate display region of the handwriting input interface.
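The effect of the pinch and spread gestures on segmentation can be sketched as an operation on tentative recognition units. This is a deliberately simplified model (it merges or splits only the leading groups, and the gesture geometry itself is omitted); all names are illustrative:

```python
def apply_gesture(stroke_groups, gesture):
    """stroke_groups: tentative recognition units (lists of strokes) ordered
    along the writing direction. A pinch over two neighbouring groups merges
    them into one recognition unit; a spread splits one group in two at its
    midpoint. A purely illustrative segmentation hint."""
    if gesture == "pinch" and len(stroke_groups) >= 2:
        merged = stroke_groups[0] + stroke_groups[1]
        return [merged] + stroke_groups[2:]
    if gesture == "spread" and stroke_groups:
        group = stroke_groups[0]
        mid = len(group) // 2
        return [group[:mid], group[mid:]] + stroke_groups[1:]
    return stroke_groups

groups = [["s1", "s2"], ["s3"]]          # stroke ids stand in for ink
print(apply_gesture(groups, "pinch"))    # → [['s1', 's2', 's3']]
print(apply_gesture(groups, "spread"))   # → [['s1'], ['s2'], ['s3']]
```

After either gesture, each resulting recognition unit is re-recognized and the new candidate is displayed.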
In some embodiments, a method of providing handwriting recognition includes: receiving handwriting input from a user, the handwriting input including a plurality of handwritten strokes provided in the handwriting input region of a handwriting input interface; identifying a plurality of recognition units from the plurality of handwritten strokes, each recognition unit including a respective subset of the plurality of handwritten strokes; generating a multi-character recognition result that includes the respective characters recognized from the plurality of recognition units; displaying the multi-character recognition result in the candidate display region of the handwriting input interface; while the multi-character recognition result is displayed in the candidate display region, receiving a deletion input from the user; and, in response to receiving the deletion input, removing the end character from the multi-character recognition result displayed in the candidate display region.
In some embodiments, a method of providing real-time handwriting recognition includes: determining the orientation of a device; providing a handwriting input interface on the device in a horizontal input mode in accordance with the device being in a first orientation, wherein a respective line of handwriting input entered in the horizontal input mode is divided into one or more respective recognition units along a horizontal writing direction; and providing the handwriting input interface on the device in a vertical input mode in accordance with the device being in a second orientation, wherein a respective line of handwriting input entered in the vertical input mode is divided into one or more respective recognition units along a vertical writing direction.
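The orientation-dependent segmentation can be sketched by clustering ink along the writing axis selected by the device orientation. The gap threshold and point-level clustering are illustrative assumptions, not the specification's algorithm:

```python
def segment_line(strokes, orientation, gap=0.1):
    """Split one line of handwriting into recognition units by clustering
    point positions along the writing axis: x when the device is in the
    horizontal input mode, y in the vertical input mode. `strokes` is a flat
    list of (x, y) points for simplicity."""
    axis = 0 if orientation == "horizontal" else 1
    ordered = sorted(strokes, key=lambda pt: pt[axis])
    units, current = [], [ordered[0]]
    for pt in ordered[1:]:
        if pt[axis] - current[-1][axis] > gap:   # large gap ⇒ new unit
            units.append(current)
            current = [pt]
        else:
            current.append(pt)
    units.append(current)
    return units

# The same ink groups differently depending on the active writing direction:
pts = [(0.1, 0.1), (0.12, 0.5), (0.6, 0.1), (0.62, 0.5)]
print(segment_line(pts, "horizontal")[0])  # → [(0.1, 0.1), (0.12, 0.5)]
print(segment_line(pts, "vertical")[0])    # → [(0.1, 0.1), (0.6, 0.1)]
```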
In some embodiments, a method of providing real-time handwriting recognition includes: receiving handwriting input from a user, the handwriting input including a plurality of handwritten strokes provided on a touch-sensitive surface coupled to a device; rendering the plurality of handwritten strokes in the handwriting input region of a handwriting input interface; dividing the plurality of handwritten strokes into two or more recognition units, each recognition unit including a respective subset of the plurality of handwritten strokes; receiving an edit request from the user; in response to the edit request, visually distinguishing the two or more recognition units in the handwriting input region; and providing a means for individually deleting each of the two or more recognition units from the handwriting input region.
In some embodiments, a method of providing real-time handwriting recognition includes: receiving a first handwriting input from a user, the first handwriting input including a plurality of handwritten strokes that form a plurality of recognition units distributed along a respective writing direction associated with the handwriting input region of a handwriting input interface; rendering each of the plurality of handwritten strokes in the handwriting input region as the handwritten strokes are provided by the user; after a recognition unit is completely rendered, starting a respective fade-out process for each of the plurality of recognition units, wherein, during the respective fade-out process, the rendering of the recognition unit in the first handwriting input gradually fades; receiving from the user a second handwriting input over the region of the handwriting input area occupied by a faded recognition unit of the plurality of recognition units; and, in response to receiving the second handwriting input: rendering the second handwriting input in the handwriting input region; and clearing all faded recognition units from the handwriting input region.
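The fade-out bookkeeping can be sketched with a logical clock: each recognition unit starts its own timer once it is fully rendered, and new ink over a faded unit clears every faded unit from the input area. The two-second delay and all names are invented parameters:

```python
def faded(units, now, delay=2.0):
    """A unit is faded once `delay` seconds have passed since it finished
    rendering. `units` maps unit id -> completion timestamp (logical clock,
    so no real timers are needed for the sketch)."""
    return {uid for uid, done_at in units.items() if now - done_at >= delay}

def on_new_stroke(units, now, delay=2.0):
    """New ink over the region of a faded unit: clear all faded units from
    the handwriting input area (their recognition has already been shown)."""
    gone = faded(units, now, delay)
    return {uid: t for uid, t in units.items() if uid not in gone}

units = {"unit1": 0.0, "unit2": 1.5}          # finished at t=0.0 and t=1.5
print(sorted(faded(units, now=2.5)))          # → ['unit1']
print(sorted(on_new_stroke(units, now=2.5)))  # → ['unit2']
```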
In some embodiments, a method of providing handwriting recognition includes: separately training a set of spatially derived features and a set of temporally derived features of a handwriting recognition model, wherein: the set of spatially derived features is trained on a corpus of training images, each image in the corpus being an image of a handwriting sample for a respective character of an output character set, and the set of temporally derived features is trained on a corpus of stroke-distribution profiles, each stroke-distribution profile numerically characterizing the spatial distribution of the plurality of strokes in a handwriting sample for a respective character of the output character set; combining the set of spatially derived features and the set of temporally derived features in the handwriting recognition model; and providing real-time handwriting recognition for a user's handwriting input using the handwriting recognition model.
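The combination of the separately trained feature sets can be sketched as late fusion of two score vectors, which preserves the spatial model's stroke-order independence because the temporal model only contributes after both trainings are complete. The characters, scores, and fusion weight below are invented for illustration:

```python
def fuse_scores(spatial_scores, temporal_scores, weight=0.7):
    """Late fusion of two independently trained models: the spatial model's
    class scores and the temporal stroke-distribution model's class scores
    are combined only after both trainings finish, so the fused recognizer
    keeps the spatial model's stroke-order independence."""
    return {c: weight * spatial_scores[c]
               + (1 - weight) * temporal_scores.get(c, 0.0)
            for c in spatial_scores}

# Two look-alike characters the spatial (image-only) model cannot separate;
# the temporal stroke-distribution profile breaks the tie:
spatial = {"未": 0.50, "末": 0.50}
temporal = {"未": 0.80, "末": 0.20}
fused = fuse_scores(spatial, temporal)
print(max(fused, key=fused.get))  # → 未
```

未 and 末 differ mainly in the relative lengths of their horizontal strokes, which is exactly the kind of similar-looking pair the temporally derived information is designed to disambiguate.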
The details of one or more embodiments of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the specification, drawings, and claims.
Detailed description of the invention
Fig. 1 is to show the block diagram of the portable multifunction device in accordance with some embodiments with touch-sensitive display.
Fig. 2 shows the portable multifunction devices in accordance with some embodiments with touch-sensitive display.
Fig. 3 is the block diagram of the exemplary multifunctional equipment in accordance with some embodiments with display and touch sensitive surface.
Fig. 4 shows in accordance with some embodiments for having the multifunctional equipment of the touch sensitive surface separated with display
Exemplary user interface.
Fig. 5 is to show the block diagram of the operating environment of hand-written input system in accordance with some embodiments.
Fig. 6 is the block diagram of more text handwriting recognition models in accordance with some embodiments.
Fig. 7 is the flow chart of the example process in accordance with some embodiments for training more text handwriting recognition models.
Fig. 8 A- Fig. 8 B shows the display in accordance with some embodiments on portable multifunction device, and more texts are hand-written in real time
The exemplary user interface of identification and input.
Fig. 9 A- Fig. 9 B is for providing the example of more text handwriting recognitions and input in real time on portable multifunction device
The flow chart of property process.
Figure 10 A- Figure 10 C is in accordance with some embodiments for providing on portable multifunction device in real time and stroke
The flow chart of the example process of sequentially unrelated handwriting recognition and input.
Figure 11 A- Figure 11 K shows in accordance with some embodiments for the selectivity in the normal view in candidate display region
Ground shows a kind of recognition result of classification and selectively shows other classifications in the extended view in candidate display region
The exemplary user interface of recognition result.
Figure 12 A- Figure 12 B is in accordance with some embodiments for selectively showing in the normal view in candidate display region
Show a kind of recognition result of classification and selectively shows the identification of other classifications in the extended view in candidate display region
As a result the flow chart of example process.
Figure 13 A- Figure 13 E shows in accordance with some embodiments for inputting emoticon character by handwriting input
Exemplary user interface.
Figure 14 is in accordance with some embodiments for inputting the example process of emoticon character by handwriting input
Flow chart.
Figure 15 A- Figure 15 K shows in accordance with some embodiments for hand-written to notify using nip gesture or extension gesture
How the handwriting input currently accumulated is divided into the exemplary user interface of one or more recognition units by input module.
Figure 16 A- Figure 16 B is in accordance with some embodiments for notifying handwriting input using nip gesture or extension gesture
How the handwriting input currently accumulated is divided into the flow chart of the example process of one or more recognition units by module.
Figure 17 A- Figure 17 H shows the handwriting input in accordance with some embodiments for user and provides character deletion one by one
Exemplary user interface.
Figure 18 A- Figure 18 B is that the handwriting input in accordance with some embodiments for user provides showing for character deletion one by one
The flow chart of example property process.
Figure 19 A- Figure 19 F shows in accordance with some embodiments between vertical writing mode and horizontal write mode
The exemplary user interface of switching.
Figure 20 A- Figure 20 C shows in accordance with some embodiments between vertical writing mode and horizontal write mode
The flow chart of the example process of switching.
Figure 21 A- Figure 21 H shows in accordance with some embodiments for providing for showing and selectively deleting in user
Handwriting input in the user interface of the device of single recognition unit that identifies.
Figure 22 A- Figure 22 B is in accordance with some embodiments for providing for showing and selectively deleting hand-written in user
The flow chart of the example process of the device of the single recognition unit identified in input.
Figure 23 A- Figure 23 L shows in accordance with some embodiments existing hand-written defeated in handwriting input region for utilizing
Enter the new handwriting input that top provides and be used as hint confirmation input, for inputting the identification for being directed to existing handwriting input and showing
As a result exemplary user interface.
Figures 24A-24B are a flow chart of an exemplary process, in accordance with some embodiments, for using new handwriting input, provided on top of existing handwriting input in the handwriting input area, as an implicit confirmation input for entering the recognition result displayed for the existing handwriting input.
Figures 25A-25B are a flow chart of an exemplary process, in accordance with some embodiments, for integrating temporally derived stroke distribution information into a handwriting recognition model that is based on spatially derived features, without destroying the stroke-order and stroke-direction independence of the handwriting recognition model.
Figure 26 is a block diagram showing the separate training and subsequent integration of spatially derived features and temporally derived features in an exemplary handwriting recognition system, in accordance with some embodiments.
Figure 27 is a block diagram showing an exemplary method of computing the stroke distribution profile of a character.
Like reference numerals refer to corresponding parts throughout the drawings.
Detailed Description
Many electronic devices have graphical user interfaces with soft keyboards for character entry. On some electronic devices, a user may also be able to install or enable a handwriting input interface that allows the user to enter characters by writing on a touch-sensitive display or touch-sensitive surface coupled to the device. Conventional handwriting recognition input methods and user interfaces suffer from a number of problems and disadvantages. For example:
Typically, conventional handwriting input functionality is enabled on a language-by-language or script-by-script basis. Each additional input language requires the installation of a separate handwriting recognition model that occupies separate storage space and memory. Little synergy is obtained by combining handwriting recognition models for different languages, and mixed-language or mixed-script handwriting recognition is usually time-consuming because of the complex disambiguation process involved.
In addition, because conventional handwriting recognition systems rely heavily on language-specific or script-specific characteristics for character recognition, recognition accuracy for mixed-language handwriting input is poor, and the combinations of languages that can be recognized are very limited. Most systems require the user to manually specify the desired language-specific handwriting recognizer for each non-default language or script before providing the handwriting input.
Many existing real-time recognition models require temporal or sequence information at the level of individual strokes, and produce inaccurate recognition results when faced with the high variability in how characters may be written (for example, variability in the shape, length, tempo, segmentation, order, and direction of strokes due to individual writing styles and habits). Some systems also require the user to comply with strict spatial and temporal criteria when providing handwriting input (for example, with built-in assumptions about the size, order, and time frame of each character input), and any deviation from these criteria can lead to inaccurate recognition results that are difficult to correct.
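The stroke-order and stroke-direction dependence discussed above can be illustrated with a minimal sketch (names and data layout are hypothetical, not from the patent): a spatially derived representation, such as a rasterized image of the strokes, is identical no matter the order or direction in which the strokes were drawn, whereas a temporally derived point sequence differs.

```python
# Sketch: spatial features are stroke-order/direction independent,
# while temporal (sequence) features are not. Helper names are
# illustrative assumptions, not the recognition model's actual API.

def rasterize(strokes, size=8):
    """Render strokes into a binary grid; the output ignores the
    order and direction in which strokes were drawn."""
    grid = [[0] * size for _ in range(size)]
    for stroke in strokes:
        for x, y in stroke:
            grid[y][x] = 1
    return grid

def temporal_feature(strokes):
    """Concatenated point sequence; depends on order and direction."""
    return [pt for stroke in strokes for pt in stroke]

# A plus sign written two different ways:
horizontal = [(1, 3), (2, 3), (3, 3), (4, 3), (5, 3)]
vertical = [(3, 1), (3, 2), (3, 3), (3, 4), (3, 5)]

way_a = [horizontal, vertical]                  # horizontal stroke first
way_b = [vertical, list(reversed(horizontal))]  # vertical first, reversed

assert rasterize(way_a) == rasterize(way_b)                # spatial: same
assert temporal_feature(way_a) != temporal_feature(way_b)  # temporal: differ
```

A recognizer built only on the temporal sequence sees two different inputs for the same character, which is one source of the inaccuracy described above.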
Currently, most real-time handwriting input interfaces allow the user to enter only a few characters at a time. Long phrases or sentences must be broken into short segments and entered separately. Such unnatural input not only imposes a cognitive load on the user by interrupting the flow of writing, but also makes it difficult for the user to correct or revise characters or phrases entered earlier.
The embodiments described below address these and related problems.
Figures 1-4 below provide a description of exemplary devices. Figures 5, 6, and 26-27 illustrate exemplary handwriting recognition and input systems. Figures 8A-8B, 11A-11K, 13A-13E, 15A-15K, 17A-17H, 19A-19F, 21A-21H, and 23A-23L illustrate exemplary user interfaces for handwriting recognition and input. Figures 7, 9A-9B, 10A-10C, 12A-12B, 14, 16A-16B, 18A-18B, 20A-20C, 22A-22B, 24A-24B, and 25 are flow charts illustrating methods of implementing handwriting recognition and input on a user device, including training a handwriting recognition model, providing real-time handwriting recognition results, providing means for entering and correcting handwriting input, and providing means for entering a recognition result as text input. The user interfaces in Figures 8A-8B, 11A-11K, 13A-13E, 15A-15K, 17A-17H, 19A-19F, 21A-21H, and 23A-23L are used to illustrate the processes in Figures 7, 9A-9B, 10A-10C, 12A-12B, 14, 16A-16B, 18A-18B, 20A-20C, 22A-22B, 24A-24B, and 25.
Example devices
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
It will also be understood that, although the terms "first", "second", etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the present invention. The first contact and the second contact are both contacts, but they are not the same contact.
The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only, and is not intended to be limiting of the invention. As used in the description of the invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "includes", "including", "comprises", and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term "if" may be construed to mean "when" or "upon" or "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" may be construed to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]", depending on the context.
Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the device is a portable communications device, such as a mobile phone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, without limitation, the iPhone, iPod Touch, and iPad devices from Apple Inc. (Cupertino, California). Other portable electronic devices, such as laptop or tablet computers with touch-sensitive surfaces (e.g., touch-screen displays and/or touchpads), may also be used. It should also be understood that, in some embodiments, the device is not a portable communications device, but rather a desktop computer with a touch-sensitive surface (e.g., a touch-screen display and/or a touchpad).
In the following discussion, an electronic device that includes a display and a touch-sensitive surface is described. It should be understood, however, that the electronic device may include one or more other physical user-interface devices, such as a physical keyboard, a mouse, and/or a joystick.
The device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
The various applications that may be executed on the device may use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface, as well as corresponding information displayed on the device, may be adjusted and/or varied from one application to the next and/or within a respective application. In this way, a common physical architecture of the device (such as the touch-sensitive surface) may support the variety of applications with user interfaces that are intuitive and transparent to the user.
Attention is now directed toward embodiments of portable devices with touch-sensitive displays. Figure 1 is a block diagram illustrating a portable multifunction device 100 with a touch-sensitive display 112, in accordance with some embodiments. Touch-sensitive display 112 is sometimes called a "touch screen" for convenience, and may also be known as or called a touch-sensitive display system. Device 100 may include memory 102 (which may include one or more computer-readable storage media), memory controller 122, one or more processing units (CPUs) 120, peripherals interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input or control devices 116, and external port 124. Device 100 may include one or more optical sensors 164. These components may communicate over one or more communication buses or signal lines 103.
It should be appreciated that device 100 is only one example of a portable multifunction device, and that device 100 may have more or fewer components than shown, may combine two or more components, or may have a different configuration or arrangement of the components. The various components shown in Figure 1 may be implemented in hardware, software, or a combination of both, including one or more signal processing circuits and/or application-specific integrated circuits.
Memory 102 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to memory 102 by other components of device 100, such as CPU 120 and peripherals interface 118, may be controlled by memory controller 122.
Peripherals interface 118 can be used to couple the input and output peripherals of the device to CPU 120 and memory 102. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in memory 102 to perform various functions for device 100 and to process data.
In some embodiments, peripherals interface 118, CPU 120, and memory controller 122 may be implemented on a single chip, such as chip 104. In some other embodiments, they may be implemented on separate chips.
RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals. RF circuitry 108 converts electrical signals to/from electromagnetic signals, and communicates with communications networks and other communications devices via the electromagnetic signals.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. Audio circuitry 110 receives audio data from peripherals interface 118, converts the audio data to an electrical signal, and transmits the electrical signal to speaker 111. Speaker 111 converts the electrical signal to human-audible sound waves. Audio circuitry 110 also receives electrical signals converted by microphone 113 from sound waves. Audio circuitry 110 converts the electrical signal to audio data and transmits the audio data to peripherals interface 118 for processing. Audio data may be retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripherals interface 118. In some embodiments, audio circuitry 110 also includes a headset jack (e.g., 212 in Fig. 2).
I/O subsystem 106 couples input/output peripherals on device 100, such as touch screen 112 and other input control devices 116, to peripherals interface 118. I/O subsystem 106 may include display controller 156 and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive electrical signals from, and send electrical signals to, other input or control devices 116. The other input control devices 116 may include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. In some alternative embodiments, input controller(s) 160 may be coupled to any (or none) of the following: a keyboard, an infrared port, a USB port, and a pointing device such as a mouse. The one or more buttons (e.g., 208 in Fig. 2) may include an up/down button for volume control of speaker 111 and/or microphone 113. The one or more buttons may include a push button (e.g., 206 in Fig. 2).
Touch-sensitive display 112 provides an input interface and an output interface between the device and a user. Display controller 156 receives electrical signals from, and/or sends electrical signals to, touch screen 112. Touch screen 112 displays visual output to the user. The visual output may include graphics, text, icons, video, and any combination thereof (collectively termed "graphics"). In some embodiments, some or all of the visual output may correspond to user-interface objects.
Touch screen 112 has a touch-sensitive surface, and a sensor or set of sensors that accepts input from the user based on haptic and/or tactile contact. Touch screen 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch screen 112 and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on touch screen 112. In an exemplary embodiment, a point of contact between touch screen 112 and the user corresponds to a finger of the user.
Touch screen 112 may use LCD (liquid crystal display) technology, LPD (light-emitting polymer display) technology, or LED (light-emitting diode) technology, although other display technologies may be used in other embodiments. Touch screen 112 and display controller 156 may detect contact and any movement or breaking thereof using any of a variety of touch-sensing technologies now known or later developed, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 112, the variety of touch-sensing technologies including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies. In an exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone, iPod Touch, and iPad from Apple Inc. (Cupertino, California).
Touch screen 112 may have a video resolution in excess of 100 dpi. In some embodiments, the touch screen has a video resolution of approximately 160 dpi. The user may make contact with touch screen 112 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work primarily with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user. Handwriting input may be provided on touch screen 112 via the positions and movements of finger-based contacts or stylus-based contacts. In some embodiments, touch screen 112 renders the finger-based or stylus-based input as instant visual feedback of the current handwriting input, and provides the visual effect of actually writing on a writing surface (e.g., a piece of paper) with a writing implement (e.g., a pen).
In some embodiments, in addition to the touch screen, device 100 may include a touchpad (not shown) for activating or deactivating particular functions. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. The touchpad may be a touch-sensitive surface that is separate from touch screen 112, or an extension of the touch-sensitive surface formed by the touch screen.
Device 100 also includes power system 162 for powering the various components. Power system 162 may include a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)), and any other components associated with the generation, management, and distribution of power in portable devices.
Device 100 may also include one or more optical sensors 164. Figure 1 shows an optical sensor coupled to optical sensor controller 158 in I/O subsystem 106. Optical sensor 164 may include charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) phototransistors. Optical sensor 164 receives light from the environment, projected through one or more lenses, and converts the light to data representing an image. In conjunction with imaging module 143 (also called a camera module), optical sensor 164 may capture still images or video.
Device 100 may also include one or more proximity sensors 166. Figure 1 shows proximity sensor 166 coupled to peripherals interface 118. Alternatively, proximity sensor 166 may be coupled to input controller 160 in I/O subsystem 106. In some embodiments, the proximity sensor turns off and disables touch screen 112 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).
Device 100 may also include one or more accelerometers 168. Figure 1 shows accelerometer 168 coupled to peripherals interface 118. Alternatively, accelerometer 168 may be coupled to input controller 160 in I/O subsystem 106. In some embodiments, information is displayed on the touch-screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers. Device 100 optionally includes, in addition to accelerometer(s) 168, a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) for obtaining information concerning the location and orientation (e.g., portrait or landscape) of device 100.
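As a rough illustration of how accelerometer data might be analyzed to select a portrait or landscape view, the sketch below compares the gravity components along the device's screen axes. The threshold, axis convention, and function name are assumptions for illustration only, not the device's actual implementation.

```python
def display_orientation(ax, ay, threshold=0.5):
    """Pick a portrait or landscape view from the gravity components
    along the device's x (short) and y (long) screen axes, in units
    of g. A dead band (threshold) avoids flapping near the diagonal."""
    if abs(ay) >= abs(ax) + threshold:
        return "portrait"
    if abs(ax) >= abs(ay) + threshold:
        return "landscape"
    return "unchanged"  # near-diagonal or lying flat: keep current view

# Device held upright: gravity is mostly along the long (y) axis.
assert display_orientation(0.1, -0.98) == "portrait"
# Device turned on its side: gravity is mostly along the short (x) axis.
assert display_orientation(0.97, 0.05) == "landscape"
# Lying flat on a table: gravity is along z, both components are small.
assert display_orientation(0.02, 0.03) == "unchanged"
```

A real implementation would typically also low-pass filter the samples before comparing them, so that brief motions do not trigger a rotation.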
In some embodiments, the software components stored in memory 102 include operating system 126, communication module (or set of instructions) 128, contact/motion module (or set of instructions) 130, graphics module (or set of instructions) 132, text input module (or set of instructions) 134, Global Positioning System (GPS) module (or set of instructions) 135, and applications (or sets of instructions) 136. Furthermore, in some embodiments, memory 102 stores handwriting input module 157, as shown in Figures 1 and 3. Handwriting input module 157 includes the handwriting recognition model, and provides handwriting recognition and input functions to a user of device 100 (or device 300). More details of handwriting input module 157 are provided with respect to Figures 5-27 and the accompanying descriptions.
Operating system 126 (e.g., Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.), and facilitates communication between the various hardware and software components.
Communication module 128 facilitates communication with other devices over one or more external ports 124, and also includes various software components for handling data received by RF circuitry 108 and/or external port 124. External port 124 (e.g., Universal Serial Bus (USB), FireWire, etc.) is adapted for coupling directly to other devices, or indirectly over a network (e.g., the Internet, a wireless LAN, etc.).
Contact/motion module 130 may detect contact with touch screen 112 (in conjunction with display controller 156) and with other touch-sensitive devices (e.g., a touchpad or a physical click wheel). Contact/motion module 130 includes various software components for performing various operations related to detection of contact, such as determining whether contact has occurred (e.g., detecting a finger-down event), determining whether there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining whether the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, may include determining the speed (magnitude), velocity (magnitude and direction), and/or acceleration (a change in magnitude and/or direction) of the point of contact. These operations may be applied to single contacts (e.g., one-finger contacts) or to multiple simultaneous contacts (e.g., "multitouch"/multiple-finger contacts). In some embodiments, contact/motion module 130 and display controller 156 detect contact on a touchpad.
Contact/motion module 130 may detect a gesture input by a user. Different gestures on the touch-sensitive surface have different contact patterns, so a gesture may be detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (lift-off) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon). As another example, detecting a finger-swipe gesture on the touch-sensitive surface includes detecting a finger-down event, followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (lift-off) event.
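The contact-pattern matching described above can be sketched as a small classifier over an event sequence. The event names, tuple layout, and slop threshold are illustrative assumptions, not the module's actual implementation.

```python
# Sketch: classifying a finger tap vs. a finger swipe from the
# pattern of contact events.

def classify_gesture(events, slop=10.0):
    """events: list of (kind, x, y) with kind in {'down', 'move', 'up'}."""
    if not events or events[0][0] != "down" or events[-1][0] != "up":
        return "unknown"
    _, x0, y0 = events[0]
    _, x1, y1 = events[-1]
    moved = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    # Tap: finger-up at (substantially) the same position as finger-down.
    if moved <= slop and not any(k == "move" for k, _, _ in events):
        return "tap"
    # Swipe: finger-down, one or more drag events, then finger-up.
    if any(k == "move" for k, _, _ in events):
        return "swipe"
    return "unknown"

assert classify_gesture([("down", 5, 5), ("up", 6, 5)]) == "tap"
assert classify_gesture(
    [("down", 5, 5), ("move", 40, 5), ("move", 80, 5), ("up", 120, 5)]
) == "swipe"
```

Distinguishing gestures purely by contact pattern in this way is what lets the same touch hardware serve taps, swipes, and handwritten strokes.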
Contact/motion module 130 is optionally used by handwriting input module 157 to register the input of handwritten strokes in the handwriting input area of a handwriting input interface shown on touch-sensitive display 112 (or in an area of touchpad 355 corresponding to a handwriting input area shown on display 340 in Fig. 3). In some embodiments, the positions, motion paths, and intensities of the contact associated with an initial finger-down event, a final finger-up event, and any time in between are recorded as a handwritten stroke. Based on such information, the handwritten strokes can be rendered on the display as feedback for the user's input. In addition, one or more input images may be generated based on the handwritten strokes registered by contact/motion module 130.
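The stroke registration just described — accumulating contact samples between finger-down and finger-up, then generating an input image from the recorded strokes — can be sketched as follows. The record layout (`(x, y, t, pressure)` tuples) and class name are assumptions for illustration, not the actual module interface.

```python
# Sketch: recording handwritten strokes from contact events and
# rasterizing the accumulated strokes into an input image.

class StrokeRecorder:
    def __init__(self):
        self.strokes = []      # completed strokes
        self.current = None    # stroke in progress

    def finger_down(self, x, y, t, pressure=1.0):
        self.current = [(x, y, t, pressure)]

    def finger_move(self, x, y, t, pressure=1.0):
        self.current.append((x, y, t, pressure))

    def finger_up(self, x, y, t, pressure=0.0):
        self.current.append((x, y, t, pressure))
        self.strokes.append(self.current)
        self.current = None

    def input_image(self, size=16):
        """Rasterize all recorded strokes into a size x size bitmap."""
        grid = [[0] * size for _ in range(size)]
        for stroke in self.strokes:
            for x, y, _, _ in stroke:
                grid[min(int(y), size - 1)][min(int(x), size - 1)] = 1
        return grid

rec = StrokeRecorder()
rec.finger_down(2, 2, t=0.00)
rec.finger_move(4, 4, t=0.01)
rec.finger_up(6, 6, t=0.02)
assert len(rec.strokes) == 1 and len(rec.strokes[0]) == 3
assert rec.input_image()[2][2] == 1
```

The recorded point list supports both uses named in the text: it can be replayed to draw on-screen feedback, and rasterized into input images for the recognizer.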
Graphics module 132 includes various known software components for rendering and displaying graphics on touch screen 112 or another display, including components for changing the intensity of displayed graphics. As used herein, the term "graphics" includes any object that can be displayed to a user, including, without limitation, text, web pages, icons (such as user-interface objects, including soft keys), digital images, videos, animations, and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic may be assigned a corresponding code. Graphics module 132 receives, from applications and the like, one or more codes specifying the graphics to be displayed, together with coordinate data and other graphic property data if necessary, and then generates screen image data to output to display controller 156.
Text input module 134, which may be a component of graphics module 132, provides soft keyboards for entering text in various applications (e.g., contacts 137, e-mail 140, IM 141, browser 147, and any other application that requires text input). In some embodiments, handwriting input module 157 is invoked, for example, via a keyboard-selection affordance shown in the user interface of text input module 134. In some embodiments, the same or a similar keyboard-selection affordance is also provided in the handwriting input interface for invoking text input module 134.
GPS module 135 determines the location of the device, and provides this information for use in various applications (e.g., to telephone 138 for use in location-based dialing, to camera 143 as picture/video metadata, and to applications that provide location-based services, such as a weather widget, a local yellow-page widget, and a map/navigation widget).
Applications 136 may include the following modules (or sets of instructions), or a subset or superset thereof: contacts module 137 (sometimes called an address book or contact list); telephone module 138; video conferencing module 139; e-mail client module 140; instant messaging (IM) module 141; workout support module 142; camera module 143 for still and/or video images; image management module 144; browser module 147; calendar module 148; widget modules 149, which may include one or more of the following: weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, dictionary widget 149-5, other widgets obtained by the user, and user-created widgets 149-6; widget creator module 150 for making user-created widgets 149-6; search module 151; video and music player module 152, which may be made up of a video player module and a music player module; notes module 153; map module 154; and/or online video module 155.
Examples of other applications 136 that may be stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch screen 112, display controller 156, contact module 130, graphics module 132, handwriting input module 157, and text input module 134, contacts module 137 may be used to manage an address book or contact list (e.g., stored in application internal state 192 of contacts module 137 in memory 102 or memory 370), including: adding one or more names to the address book; deleting one or more names from the address book; associating one or more telephone numbers, one or more e-mail addresses, one or more physical addresses, or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers or e-mail addresses to initiate and/or facilitate communications by telephone 138, video conference 139, e-mail 140, or IM 141; and so forth.
In conjunction with RF circuit 108, voicefrequency circuit 110, loudspeaker 111, microphone 113, touch screen 112, display controller
156, contact module 130, figure module 132, handwriting input module 157 and text input module 134, phone module 138 can quilts
For inputting character string corresponding with telephone number, one or more telephone numbers in access address book 137, modification by
The telephone number of input dials corresponding telephone number, carries out call and disconnect or hang up when conversing and completing.As above
Described, any one of multiple communication standards, agreement and technology can be used in wireless communication.
In conjunction with RF circuit 108, voicefrequency circuit 110, loudspeaker 111, microphone 113, touch screen 112, display controller
156, optical sensor 164, optical sensor controller 158, contact module 130, figure module 132, handwriting input module
157, text input module 134, contacts list 137 and phone module 138, video conference module 139 include for according to
The executable instruction of the video conference between user and other one or more participants is initiated, carries out and is terminated in family instruction.
In conjunction with RF circuit 108, touch screen 112, display controller 156, contact module 130, figure module 132, hand-written defeated
Enter module 157 and text input module 134, email client module 140 includes for creating in response to user instruction
Build, send, receive and manage the executable instruction of Email.In conjunction with image management module 144, email client module
140 to be very easy to creation and send the Email with the still image or video image that are shot by camera model 143.
In conjunction with RF circuit 108, touch screen 112, display controller 156, contact module 130, figure module 132, hand-written defeated
Enter module 157 and text input module 134, instant message module 141 includes for inputting character sequence corresponding with instant message
Column modify the character being previously entered, the corresponding instant message of transmission (for example, short message service (SMS) or Multimedia Message is used to take
(MMS) agreement of being engaged in for the instant message based on phone or using XMPP, SIMPLE or IMPS mono- to be used for based on internet
Instant message), receive instant message and check received instant message executable instruction.In some embodiments,
The instant message transferred and/or received may include that figure, photo, audio file, video file and/or MMS and/or enhancing disappear
Other attachmentes supported in breath service (EMS).As used herein, " instant message " refers to the message based on phone (for example, making
The message sent with SMS or MMS) and message Internet-based (for example, disappearing using XMPP, SIMPLE or IMPS transmission
Both breath).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact module 130, graphics module 132, handwriting input module 157, text input module 134, GPS module 135, map module 154, and music player module 146, the workout support module 142 includes executable instructions to create workouts (e.g., with time, distance, and/or calorie-burning goals); communicate with workout sensors (sports devices); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store, and transmit workout data.
In conjunction with touch screen 112, display controller 156, optical sensor(s) 164, optical sensor controller 158, contact module 130, graphics module 132, and image management module 144, the camera module 143 includes executable instructions to capture still images or video (including a video stream) and store them in memory 102, modify characteristics of a still image or video, or delete a still image or video from memory 102.
In conjunction with touch screen 112, display controller 156, contact module 130, graphics module 132, handwriting input module 157, text input module 134, and camera module 143, the image management module 144 includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, label, delete, present (e.g., in a digital slide show or album), and store still and/or video images.
In conjunction with RF circuitry 108, touch screen 112, display system controller 156, contact module 130, graphics module 132, handwriting input module 157, and text input module 134, the browser module 147 includes executable instructions to browse the Internet in accordance with user instructions, including searching for, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.
In conjunction with RF circuitry 108, touch screen 112, display system controller 156, contact module 130, graphics module 132, handwriting input module 157, text input module 134, e-mail client module 140, and browser module 147, the calendar module 148 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do lists, etc.) in accordance with user instructions.
In conjunction with RF circuitry 108, touch screen 112, display system controller 156, contact module 130, graphics module 132, handwriting input module 157, text input module 134, and browser module 147, the widget modules 149 are mini-applications that may be downloaded and used by a user (e.g., weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, and dictionary widget 149-5) or created by the user (e.g., user-created widget 149-6). In some embodiments, a widget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a widget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! Widgets).
In conjunction with RF circuitry 108, touch screen 112, display system controller 156, contact module 130, graphics module 132, handwriting input module 157, text input module 134, and browser module 147, the widget creator module 150 may be used by a user to create widgets (e.g., turning a user-specified portion of a web page into a widget).
In conjunction with touch screen 112, display system controller 156, contact module 130, graphics module 132, handwriting input module 157, and text input module 134, the search module 151 includes executable instructions to search memory 102 for text, music, sound, image, video, and/or other files that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions.
In conjunction with touch screen 112, display system controller 156, contact module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, and browser module 147, the video and music player module 152 includes executable instructions that allow the user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, and executable instructions to display, present, or otherwise play back videos (e.g., on touch screen 112 or on an external display connected via external port 124). In some embodiments, device 100 may include the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.).
In conjunction with touch screen 112, display controller 156, contact module 130, graphics module 132, handwriting input module 157, and text input module 134, the notepad module 153 includes executable instructions to create and manage notes, to-do lists, and the like in accordance with user instructions.
In conjunction with RF circuitry 108, touch screen 112, display system controller 156, contact module 130, graphics module 132, handwriting input module 157, text input module 134, GPS module 135, and browser module 147, the map module 154 may be used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions; data on stores and other points of interest at or near a particular location; and other location-based data) in accordance with user instructions.
In conjunction with touch screen 112, display system controller 156, contact module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, handwriting input module 157, text input module 134, e-mail client module 140, and browser module 147, the online video module 155 includes instructions that allow the user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on the touch screen or on an external display connected via external port 124), send an e-mail with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, the instant messaging module 141, rather than the e-mail client module 140, is used to send a link to a particular online video.
Each of the above-identified modules and applications corresponds to a set of executable instructions for performing one or more of the functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments. In some embodiments, memory 102 may store a subset of the modules and data structures identified above. Furthermore, memory 102 may store additional modules and data structures not described above.
In some embodiments, device 100 is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or a touchpad as the primary input control device for operation of device 100, the number of physical input control devices (such as push buttons, dials, and the like) on device 100 may be reduced.
FIG. 2 illustrates a portable multifunction device 100 having a touch screen 112 in accordance with some embodiments. The touch screen may display one or more graphics within a user interface (UI) 200. In this embodiment, as well as others described below, a user may select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture may include one or more taps, one or more swipes (from left to right, from right to left, upward and/or downward), and/or a rolling of a finger (from right to left, from left to right, upward and/or downward) that has made contact with device 100. In some embodiments, inadvertent contact with a graphic does not select the graphic. For example, when the gesture corresponding to selection is a tap, a swipe gesture that sweeps over an application icon does not select the corresponding application.
Device 100 may also include one or more physical buttons, such as a "home" or menu button 204. As described previously, menu button 204 may be used to navigate to any application 136 in a set of applications that may be executed on device 100. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on touch screen 112.
In one embodiment, device 100 includes touch screen 112, menu button 204, push button 206 for powering the device on/off and locking the device, volume adjustment button(s) 208, Subscriber Identity Module (SIM) card slot 210, headset jack 212, and docking/charging external port 124. Push button 206 may be used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In an alternative embodiment, device 100 also may accept verbal input through microphone 113 for activation or deactivation of some functions.
FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. Device 300 need not be portable. In some embodiments, device 300 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child's learning toy), a gaming system, a telephone device, or a control device (e.g., a home or industrial controller). Device 300 typically includes one or more processing units (CPUs) 310, one or more network or other communications interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. Communication buses 320 may include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. Device 300 includes an input/output (I/O) interface 330 comprising a display 340, which is typically a touch screen display. I/O interface 330 also may include a keyboard and/or mouse (or other pointing device) 350 and a touchpad 355. Memory 370 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 370 optionally may include one or more storage devices remotely located from the CPU(s) 310. In some embodiments, memory 370 stores programs, modules, and data structures analogous to the programs, modules, and data structures stored in memory 102 of portable multifunction device 100 (FIG. 1), or a subset thereof. Furthermore, memory 370 may store additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 may store a drawing module 380, a presentation module 382, a word processing module 384, a website creation module 386, a disk authoring module 388, and/or a spreadsheet module 390, while memory 102 of portable multifunction device 100 (FIG. 1) may not store these modules.
Each of the above-identified elements in FIG. 3 may be stored in one or more of the previously mentioned memory devices. Each of the above-identified modules corresponds to a set of instructions for performing a function described above. The above-identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments. In some embodiments, memory 370 may store a subset of the modules and data structures identified above. Furthermore, memory 370 may store additional modules and data structures not described above.
FIG. 4 illustrates an exemplary user interface on a device (e.g., device 300 in FIG. 3) with a touch-sensitive surface 451 (e.g., a tablet or touchpad 355 in FIG. 3) that is separate from the display 450 (e.g., touch screen display 112). Although many of the examples that follow will be given with reference to inputs on touch screen display 112 (where the touch-sensitive surface and the display are combined), in some embodiments the device detects inputs on a touch-sensitive surface that is separate from the display, as shown in FIG. 4. In some embodiments, the touch-sensitive surface (e.g., 451 in FIG. 4) has a primary axis (e.g., 452 in FIG. 4) that corresponds to a primary axis (e.g., 453 in FIG. 4) on the display (e.g., 450). In accordance with these embodiments, the device detects contacts (e.g., 460 and 462 in FIG. 4) with the touch-sensitive surface 451 at locations that correspond to respective locations on the display (e.g., in FIG. 4, 460 corresponds to 468 and 462 corresponds to 470). In this way, when the touch-sensitive surface (e.g., 451 in FIG. 4) is separate from the display (450 in FIG. 4) of the multifunction device, user inputs (e.g., contacts 460 and 462, and movements thereof) detected by the device on the touch-sensitive surface are used by the device to manipulate the user interface on the display. It should be understood that similar methods may be used for other user interfaces described herein.
Attention is now directed toward embodiments of handwriting input methods and associated user interfaces ("UI") that may be implemented on a multifunction device (e.g., device 100).
FIG. 5 is a block diagram illustrating an exemplary handwriting input module 157 in accordance with some embodiments. The exemplary handwriting input module 157 interacts with an I/O interface module 500 (e.g., I/O interface 330 in FIG. 3 or I/O subsystem 106 in FIG. 1) to provide handwriting input capabilities on a device. As shown in FIG. 5, handwriting input module 157 includes an input processing module 502, a handwriting recognition module 504, and a result generation module 506. In some embodiments, input processing module 502 includes a segmentation module 508 and a normalization module 510. In some embodiments, result generation module 506 includes a radical clustering module 512 and one or more language models 514.
In some embodiments, input processing module 502 communicates with I/O interface module 500 (e.g., I/O interface 330 in FIG. 3 or I/O subsystem 106 in FIG. 1) to receive handwriting input from a user. The handwriting input is received via any suitable means, such as touch-sensitive display system 112 in FIG. 1 and/or touchpad 355 in FIG. 3. The handwriting input includes data representing each stroke provided by the user within a predetermined handwriting input area of a handwriting input UI. In some embodiments, the data representing each stroke of the handwriting input includes data such as the start and end locations, the intensity profile, and the motion path of a sustained contact within the handwriting input area (e.g., contact between the user's finger or a stylus and a touch-sensitive surface of the device). In some embodiments, I/O interface module 500 passes the sequence of handwritten strokes 516, with their associated temporal and spatial information, to input processing module 502 in real time. Meanwhile, the I/O interface module also provides a real-time rendering 518 of the handwritten strokes within the handwriting input area of the handwriting input user interface, as visual feedback on the user's input.
In some embodiments, as the data representing each handwritten stroke is received by input processing module 502, the temporal and sequence information associated with multiple consecutive strokes is also recorded. For example, the data optionally includes, for each stroke with a corresponding stroke sequence number, the shape, size, and spatial saturation of the stroke, the relative spatial locations of the strokes along the writing direction of the overall handwriting input, and so on. In some embodiments, input processing module 502 provides instructions back to I/O interface module 500 to render the received strokes on a display of the device (e.g., display 340 in FIG. 3 or touch-sensitive display 112 in FIG. 1). In some embodiments, the received strokes are rendered as an animation that provides a visual effect simulating the real process of a writing instrument (e.g., a pen) writing on a writing surface (e.g., a sheet of paper). In some embodiments, the user is optionally allowed to specify the pen-tip style, color, texture, and the like of the rendered strokes.
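Purely by way of illustration, the per-stroke data described above may be modeled as a simple record type. The following sketch is not part of the disclosed embodiments; the field names and methods are assumptions of this example only:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Stroke:
    """One sustained contact in the handwriting input area."""
    sequence_number: int  # order in which the stroke was entered
    points: List[Tuple[float, float]] = field(default_factory=list)   # sampled (x, y) path
    timestamps: List[float] = field(default_factory=list)             # per-point times (seconds)
    intensities: List[float] = field(default_factory=list)            # per-point contact intensity

    def bounding_box(self) -> Tuple[float, float, float, float]:
        """Spatial extent (min_x, min_y, max_x, max_y) of the stroke."""
        xs = [p[0] for p in self.points]
        ys = [p[1] for p in self.points]
        return (min(xs), min(ys), max(xs), max(ys))

    def duration(self) -> float:
        """Elapsed time from the first sample to the last."""
        return self.timestamps[-1] - self.timestamps[0]
```

A record of this kind carries both the spatial information (the point path) and the temporal information (timestamps, sequence number) referred to above; the later normalization step deliberately discards the temporal part.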
In some embodiments, input processing module 502 processes the strokes currently accumulated in the handwriting input area to assign the strokes to one or more recognition units. In some embodiments, each recognition unit corresponds to a character to be recognized by handwriting recognition model 504. In some embodiments, each recognition unit corresponds to a character or a radical to be recognized by handwriting recognition model 504. A radical is a recurring component found in multiple composite logographic characters. A composite logographic character may include two or more radicals arranged according to a common layout (e.g., a left-right layout, a top-bottom layout, etc.). In one example, the single Chinese character "听" ("listen") is constructed from two radicals, namely the left radical "口" and the right radical "斤".
In some embodiments, input processing module 502 relies on segmentation module 508 to assign or divide the currently accumulated handwritten strokes into one or more recognition units. For example, when segmenting the strokes for the handwritten character "听", segmentation module 508 optionally assigns the strokes clustered on the left side of the handwriting input to one recognition unit (i.e., for the left radical "口"), and the strokes clustered on the right side of the handwriting input to another recognition unit (i.e., for the right radical "斤"). Alternatively, segmentation module 508 may also assign all of the strokes to a single recognition unit (i.e., for the character "听").
In some embodiments, segmentation module 508 divides the currently accumulated handwriting input (e.g., one or more handwritten strokes) into groups of recognition units in several different ways, to create a segmentation lattice 520. For example, suppose that a total of nine strokes have been accumulated so far in the handwriting input area. According to a first segmentation chain of segmentation lattice 520, strokes 1, 2, and 3 are grouped into a first recognition unit 522, and strokes 4, 5, and 6 are grouped into a second recognition unit 524. According to a second segmentation chain of segmentation lattice 520, all of strokes 1-9 are grouped into a single recognition unit 526.
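One simple way to enumerate alternative segmentation chains over n temporally ordered strokes is to generate every split of the stroke sequence into contiguous groups. The following sketch is illustrative only; the patent does not specify the enumeration algorithm, and a practical segmentation module would prune most chains with spatial heuristics:

```python
def segmentation_chains(strokes):
    """Enumerate every split of a stroke sequence into contiguous recognition units."""
    if not strokes:
        return [[]]
    chains = []
    for k in range(1, len(strokes) + 1):
        head = strokes[:k]  # one candidate recognition unit
        for rest in segmentation_chains(strokes[k:]):
            chains.append([head] + rest)
    return chains
```

For nine strokes this yields 2^8 = 256 chains, including the chain [[1, 2, 3], [4, 5, 6], [7, 8, 9]] and the single-unit chain containing all nine strokes.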
In some embodiments, a segmentation score is assigned to each segmentation chain to measure how likely it is that the particular segmentation chain is a correct segmentation of the current handwriting input. In some embodiments, the factors optionally used to calculate the segmentation score of each segmentation chain include: the absolute and/or relative sizes of the strokes, the relative and/or absolute spans of the strokes in each direction (e.g., the x, y, and z directions), the mean and/or variance of the stroke saturation levels, the absolute and/or relative distances to adjacent strokes, the absolute and/or relative locations of the strokes, the order or sequence in which the strokes were entered, the duration of each stroke, the mean and/or variance of the speed (or tempo) at which each stroke was entered, the intensity profile of each stroke along its length, and so on. In some embodiments, one or more functions or transformations are optionally applied to one or more of these factors to generate the segmentation scores for the different segmentation chains in segmentation lattice 520.
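As a toy illustration of combining such factors into a single score, the sketch below uses only two of the factors listed above (relative unit sizes and gaps between adjacent units); the specific weights, squashing function, and factor choices are assumptions of this example, not taken from the patent:

```python
import math

def segmentation_score(units, gap_scale=10.0, size_tolerance=0.5):
    """Toy segmentation score favoring clear gaps between units and
    similarly sized units. `units` is a list of bounding boxes
    (min_x, min_y, max_x, max_y), one per recognition unit."""
    widths = [u[2] - u[0] for u in units]
    mean_w = sum(widths) / len(widths)
    # factor 1: relative size consistency across units
    size_var = sum((w - mean_w) ** 2 for w in widths) / len(widths)
    size_factor = math.exp(-size_var / (size_tolerance * mean_w ** 2 + 1e-9))
    # factor 2: horizontal gap between adjacent units (a larger gap
    # suggests a more plausible character boundary)
    gap_factor = 1.0
    for a, b in zip(units, units[1:]):
        gap = b[0] - a[2]
        gap_factor *= 1.0 / (1.0 + math.exp(-gap / gap_scale * 4))  # squashed to (0, 1)
    return size_factor * gap_factor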
In some embodiments, after segmentation module 508 has segmented the current handwriting input 516 received from the user, segmentation module 508 passes segmentation lattice 520 to normalization module 510. In some embodiments, normalization module 510 generates an input image (e.g., input image 528) for each recognition unit (e.g., recognition units 522, 524, and 526) specified in segmentation lattice 520. In some embodiments, the normalization module performs the necessary or desired normalization (e.g., stretching, cropping, down-sampling, or up-sampling) on the input image, so that the input image can be provided as input to handwriting recognition model 504. In some embodiments, each input image 528 includes the strokes assigned to one respective recognition unit, and corresponds to one character or radical to be recognized by handwriting recognition module 504.
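The rasterization and normalization step can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the image size, binary pixels, and uniform scaling rule are assumptions of this example. Note how the result depends only on which pixels the strokes touch, not on when or in what order they were drawn:

```python
def rasterize(strokes, size=28):
    """Render the strokes of one recognition unit into a size x size
    binary input image, normalizing position and scale. Temporal
    information (stroke order, direction) is discarded; only the set
    of touched pixels survives."""
    points = [p for stroke in strokes for p in stroke]
    min_x = min(p[0] for p in points)
    max_x = max(p[0] for p in points)
    min_y = min(p[1] for p in points)
    max_y = max(p[1] for p in points)
    span = max(max_x - min_x, max_y - min_y) or 1.0  # uniform scale, avoid divide-by-zero
    image = [[0] * size for _ in range(size)]
    for stroke in strokes:
        for x, y in stroke:
            col = int((x - min_x) / span * (size - 1))
            row = int((y - min_y) / span * (size - 1))
            image[row][col] = 1
    return image
```

Reversing the point order of a stroke, or reordering the strokes, produces an identical image, which is precisely the stroke-order and stroke-direction independence discussed in the next paragraphs.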
In some embodiments, the input images generated by input processing module 502 do not include any temporal information associated with the individual strokes; only spatial information (e.g., information represented by the locations and densities of the pixels in the input image) is preserved in the input image. A handwriting recognition model trained purely on the spatial information of the writing samples in a training corpus can perform handwriting recognition based on spatial information alone. As a result, the handwriting recognition model is independent of stroke order and stroke direction, without having to exhaustively cover, during training, all possible permutations of stroke orders and stroke directions for the strokes of all characters in its vocabulary (i.e., all of its output classes). In fact, in some embodiments, handwriting recognition module 504 does not distinguish pixels belonging to one stroke from pixels belonging to another stroke in the input image.
As will be explained in more detail later (e.g., with respect to FIGS. 25A-27), in some embodiments, some temporally derived stroke distribution information is reintroduced into the purely spatial handwriting recognition model to improve recognition accuracy, without compromising the independence of the recognition model from stroke order and stroke direction.
In some embodiments, the input image generated by input processing module 502 for one recognition unit does not overlap with the input image of any other recognition unit in the same segmentation chain. In some embodiments, the input images generated for different recognition units may have some overlap. In some embodiments, some overlap between input images is permitted in order to recognize handwriting input written in a cursive writing style and/or containing connected characters (e.g., a stroke connecting two adjacent characters).
In some embodiments, some normalization is performed before segmentation. In some embodiments, the functions of segmentation module 508 and normalization module 510 may be performed by the same module or by two or more other modules.
In some embodiments, when the input image 528 of each recognition unit is provided as input to handwriting recognition model 504, handwriting recognition model 504 produces an output consisting of the different likelihoods that the recognition unit corresponds to respective output characters in the repertoire or vocabulary of handwriting recognition model 504 (i.e., the list of all characters and radicals that handwriting recognition model 504 can recognize). As will be explained in more detail later, handwriting recognition model 504 has been trained to recognize a large number of characters in multiple scripts (e.g., at least three non-overlapping scripts encoded by the Unicode standard). Examples of non-overlapping scripts include Latin script, Chinese characters, Arabic script, Persian script, Cyrillic script, and artificial scripts such as emoji characters. In some embodiments, handwriting recognition model 504 produces one or more output characters for each input image (i.e., for each recognition unit), and assigns a respective recognition score to each output character based on the confidence level associated with the character recognition.
In some embodiments, handwriting recognition model 504 generates a candidate lattice 530 in accordance with segmentation lattice 520, in which each arc in a segmentation chain of segmentation lattice 520 (e.g., corresponding to a respective recognition unit 522, 524, 526) is expanded into one or more candidate arcs in candidate lattice 530 (e.g., arcs 532, 534, 536, 538, and 540, each corresponding to a respective output character). Each candidate chain in candidate lattice 530 is scored in accordance with the segmentation score of its underlying segmentation chain and the recognition scores associated with the output characters in the character chain.
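The expansion of one segmentation chain into scored candidate chains can be sketched as below. The multiplicative combination of segmentation score and recognition scores is an illustrative assumption; the patent does not give the exact scoring formula:

```python
from itertools import product

def candidate_chains(segmentation_score, unit_hypotheses):
    """Expand one segmentation chain into scored candidate chains.
    `unit_hypotheses` is a list (one entry per recognition unit) of
    (character, recognition_score) alternatives. Each candidate
    chain's score combines the segmentation score of the underlying
    chain with the recognition scores of its characters."""
    chains = []
    for combo in product(*unit_hypotheses):
        score = segmentation_score
        for _, recognition_score in combo:
            score *= recognition_score
        chains.append((''.join(ch for ch, _ in combo), score))
    return sorted(chains, key=lambda c: c[1], reverse=True)
```

For a two-unit segmentation chain with two character hypotheses per unit, this produces four candidate chains, ranked by their combined scores.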
In some embodiments, after handwriting recognition model 504 has generated the output characters from the input images 528 of the recognition units, candidate lattice 530 is passed to result generation module 506 to generate one or more recognition results for the currently accumulated handwriting input 516.
In some embodiments, result generation module 506 uses radical clustering module 512 to combine one or more radicals in a candidate chain into a composite character. In some embodiments, result generation module 506 uses one or more language models 514 to determine whether a character chain in candidate lattice 530 is a likely sequence in a particular language represented by the language model. In some embodiments, result generation module 506 generates a modified candidate lattice 542 by eliminating particular arcs or combining two or more arcs in candidate lattice 530.
In some embodiments, result generation module 506 generates an integrated recognition score for each character sequence (e.g., character sequences 544 and 546) remaining in the modified candidate lattice 542, based on the recognition scores of the output characters in the character sequence as modified (e.g., boosted or eliminated) by radical clustering module 512 and language models 514. In some embodiments, result generation module 506 ranks the different character sequences remaining in the modified candidate lattice 542 according to their integrated recognition scores.
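One common way such an integrated score can be formed is a log-space interpolation of the recognition-based chain score with a language-model probability. This sketch assumes a bigram language model and a fixed interpolation weight; both are assumptions of this example, not details taken from the patent:

```python
import math

def integrated_score(chain_score, sequence, bigram_logprob, lm_weight=0.4):
    """Combine a candidate chain's recognition-based score with a
    language-model probability for its character sequence, in log space.
    `bigram_logprob` maps a character pair to its log probability;
    unseen pairs are given a small floor probability."""
    recognition = math.log(chain_score)
    lm = sum(bigram_logprob.get(pair, math.log(1e-6))
             for pair in zip(sequence, sequence[1:]))
    return (1 - lm_weight) * recognition + lm_weight * lm
```

With such a score, two candidate chains with identical recognition scores are separated by how plausible their character sequences are in the modeled language.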
In some embodiments, result generation module 506 sends the top-ranked character sequences to I/O interface module 500 as ranked recognition results 548 for display to the user. In some embodiments, I/O interface module 500 displays the received recognition results 548 (e.g., "中国" and "帼") in a candidate display area of the handwriting input interface. In some embodiments, the I/O interface module displays multiple recognition results (e.g., "中国" and "帼") to the user, and allows the user to select a recognition result as the text input to be entered into a relevant application. In some embodiments, the I/O interface module automatically enters the top-ranked recognition result (e.g., "帼") in response to other input or an indication that the user has confirmed the recognition result. Effective automatic entry of the top-ranked result can improve the efficiency of the input interface and provide a better user experience.
In some embodiments, result generation module 506 uses other factors to modify the integrated recognition scores of the candidate chains. For example, in some embodiments, result generation module 506 optionally maintains a log of the most frequently used characters for a particular user or for many users. If a particular candidate character or character sequence is found in the log of most frequently used characters or character sequences, result generation module 506 optionally boosts the integrated recognition score of that particular candidate character or character sequence.
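As a toy sketch of such a frequency-based adjustment (the boost factor and data shapes are assumptions of this example only):

```python
def boost_frequent(candidates, usage_log, boost=1.2):
    """Boost the integrated recognition score of candidate sequences
    found in a most-frequently-used log. `candidates` maps a character
    sequence to its integrated score; `usage_log` is the set of
    frequently used sequences for this user."""
    return {seq: score * boost if seq in usage_log else score
            for seq, score in candidates.items()}
```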
In some embodiments, handwriting input module 157 provides real-time updates of the recognition results shown to the user. For example, in some embodiments, for each additional stroke entered by the user, input processing module 502 optionally re-segments the currently accumulated handwriting input and revises the segmentation lattice and input images provided to handwriting recognition model 504. In turn, handwriting recognition model 504 optionally revises the candidate lattice provided to result-generation module 506, and result-generation module 506 optionally updates the recognition results presented to the user. As used in this specification, real-time handwriting recognition refers to handwriting recognition in which recognition results are presented to the user immediately or within a short time (for example, within tens of milliseconds to a few seconds) of the handwriting input. Real-time handwriting recognition differs from offline recognition (for example, as performed in offline optical character recognition (OCR) applications) in that recognition is initiated at once and performed substantially at the same time as the handwriting input is received, rather than at some time after the current user session, from a saved recorded image retrieved later. In addition, offline character recognition is performed without any temporal information about the individual strokes and the stroke order, and therefore does not use such information to perform segmentation or to further distinguish between candidate characters of similar appearance.
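The per-stroke update loop described above can be sketched in terms of pure data flow. The function names and toy stand-ins below are illustrative assumptions; they show only that each new stroke triggers re-segmentation of everything accumulated so far, a fresh recognition pass, and an updated result list:

```python
# Assumed structure of the real-time update loop: modules 502, 504, and 506
# are modeled as plain callables.
def on_stroke_added(strokes, segment, recognize, rank):
    """strokes: all strokes accumulated in the current input session."""
    seg_lattice, images = segment(strokes)   # input processing module (re-segmentation)
    cand_lattice = recognize(images)         # handwriting recognition model
    return rank(seg_lattice, cand_lattice)   # result-generation module

# Toy stand-ins that only demonstrate the data flow:
def segment(s):   return ("lattice", [f"img{len(s)}"])
def recognize(i): return [c.upper() for c in i]
def rank(sl, cl): return cl

history = []
for stroke in ["s1", "s2", "s3"]:
    history.append(stroke)
    results = on_stroke_added(history, segment, recognize, rank)
print(results)  # final pass re-ran recognition over all three accumulated strokes
```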
In some embodiments, handwriting recognition model 504 is implemented as a convolutional neural network (CNN). Fig. 6 shows an exemplary convolutional neural network 602 trained on a multi-script training corpus 604, where multi-script training corpus 604 includes writing samples for the characters of multiple non-overlapping scripts.
As shown in Fig. 6, convolutional neural network 602 includes an input plane 606 and an output plane 608. Between input plane 606 and output plane 608 are multiple convolutional layers 610 (e.g., including a first convolutional layer 610a, zero or more intermediate convolutional layers (not shown), and a last convolutional layer 610n). Each convolutional layer 610 is followed by a corresponding sub-sampling layer 612 (for example, a first sub-sampling layer 612a, zero or more intermediate sub-sampling layers (not shown), and a last sub-sampling layer 612n). After the convolutional and sub-sampling layers, and just before output plane 608, is a hidden layer 614. Hidden layer 614 is the last layer before output plane 608. In some embodiments, a kernel layer 616 (e.g., including a first kernel layer 616a, zero or more intermediate kernel layers (not shown), and a last kernel layer 616n) is inserted before each convolutional layer 610 to improve computational efficiency.
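The arithmetic of how an input image shrinks through the stack of convolutional layers 610 and sub-sampling layers 612 can be illustrated as follows. The 48x48 input size, the 5x5 kernels, and the two-stage stack are assumptions chosen for the sketch, not values from Fig. 6:

```python
# Illustrative feature-map size arithmetic through conv + sub-sampling stages.
def conv_out(size, kernel):   # 'valid' convolution, stride 1
    return size - kernel + 1

def pool_out(size, k):        # sub-sampling by factor k along each side
    return size // k

size = 48                               # normalized input image side (assumed)
for kernel, pool in [(5, 2), (5, 2)]:   # two conv + sub-sampling stages (assumed)
    size = pool_out(conv_out(size, kernel), pool)
print(size)  # spatial side of the feature maps reaching the hidden layer
```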
As shown in Fig. 6, input plane 606 receives an input image 614 of a handwriting recognition unit (for example, a handwritten character or radical), and output plane 608 outputs a set of probabilities that the recognition unit belongs to the respective output classes (for example, the neural network is configured to output a particular character in the set of characters to be recognized). The output classes of the neural network as a whole (or the output character set of the neural network) are also referred to as the repertoire or vocabulary of the handwriting recognition model. A convolutional neural network as described herein can be trained to have a repertoire of tens of thousands of characters.
As input image 614 is processed by the different layers of the neural network, the convolutional layers 610 extract the different spatial features embedded in input image 614. Each convolutional layer 610, also referred to as a set of feature maps, acts as a filter that picks out particular features in input image 614 for distinguishing between the images of different characters. The sub-sampling layers 612 ensure that features of increasingly large scale are captured from input image 614. In some embodiments, the sub-sampling layers 612 are implemented using a max-pooling technique. A max-pooling layer creates position invariance over larger local regions, and down-samples the output image of the preceding convolutional layer by a factor of Kx and Ky along each direction, where Kx and Ky are the sides of the max-pooling rectangle. Max-pooling leads to a faster convergence rate by selecting superior invariant features, which improves generalization performance. In some embodiments, sub-sampling is implemented using other techniques.
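The max-pooling operation described above can be sketched in a few lines of pure Python. This is a minimal illustration of the Kx-by-Ky down-sampling; it assumes the map dimensions are exact multiples of the pooling rectangle:

```python
# Minimal max-pooling sketch: each output cell keeps only the strongest
# response within its Kx-by-Ky rectangle, giving local position invariance
# and down-sampling the map by Kx and Ky along each direction.
def max_pool(image, kx, ky):
    rows, cols = len(image), len(image[0])
    return [[max(image[r + dr][c + dc]
                 for dr in range(ky) for dc in range(kx))
             for c in range(0, cols - kx + 1, kx)]
            for r in range(0, rows - ky + 1, ky)]

feature_map = [[1, 3, 2, 0],
               [4, 2, 1, 1],
               [0, 1, 5, 6],
               [2, 1, 0, 2]]
print(max_pool(feature_map, 2, 2))  # 4x4 map reduced to 2x2
```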
In some embodiments, after the last set of convolutional layer 610n and sub-sampling layer 612n, and before output plane 608, is a fully-connected layer, namely hidden layer 614. Fully-connected hidden layer 614 is a multilayer perceptron that fully connects the nodes in the last sub-sampling layer 612n to the nodes in output plane 608. Hidden layer 614 takes the output image received from the preceding layer and, through a logistic regression step, maps it to one of the output characters in output layer 608.
During training of convolutional neural network 602, the features in convolutional layers 610 and the respective weights associated with those features, as well as the weights associated with the parameters in hidden layer 614, are tuned such that the classification error is minimized for the writing samples in training corpus 604 having known output classes. Once convolutional neural network 602 has been trained and an optimized set of parameters and associated weights has been established for the different layers in the network, convolutional neural network 602 can be used to recognize new writing samples 618 that are not part of training corpus 604, such as input images generated based on real-time handwriting input received from a user.
As described herein, the convolutional neural network of the handwriting input interface is trained using a multi-script training corpus to enable multi-script or mixed-script handwriting recognition. In some embodiments, the convolutional neural network is trained to recognize a large repertoire of 30,000 to more than 60,000 characters (for example, all characters encoded by the Unicode standard). Most existing handwriting recognition systems are based on Hidden Markov Models (HMMs) that depend on stroke order. In addition, most existing handwriting recognition models are language-specific and include small repertoires ranging from a few dozen characters (for example, the English alphabet, the Greek alphabet, the ten digits, and the like) up to a few thousand characters (for example, a set of the most common Chinese characters). Thus, a universal recognizer as described herein can handle several orders of magnitude more characters than most existing systems.
Some conventional handwriting systems may include several individually trained handwriting recognition models, each tailored to a specific language or a small character set. A writing sample is propagated through the different recognition models until it can be classified. For example, a handwriting sample may be provided to a chain of language-specific or script-specific character recognition models; if the first recognition model ultimately cannot classify the handwriting sample, the sample is provided to the next recognition model, which attempts to classify it within its own repertoire. This cascaded mode of classification is time-consuming, and the memory requirement grows sharply with each additional recognition model that is needed.
Other existing models require the user to specify a preferred language, and use the selected handwriting recognition model to classify the current input. Such implementations are not only cumbersome to use and consume substantial memory, but also cannot be used to recognize mixed-language input. Requiring the user to switch language preferences midway through entering mixed-language or mixed-script input is impractical.
The multi-character recognizer or universal recognizer described herein addresses at least some of the above problems of conventional recognition systems. Fig. 7 is a flow chart of an exemplary process 700 for training a handwriting recognition module (such as a convolutional neural network) using a large multi-script training corpus, such that the handwriting recognition module can subsequently be used to provide real-time multi-language and multi-script handwriting recognition for a user's handwriting input.
In some embodiments, the training of the handwriting recognition model is performed on a server device, and the trained handwriting recognition model is then provided to a user device. The handwriting recognition model optionally performs real-time handwriting recognition locally on the user device, without further assistance from the server. In some embodiments, both training and recognition are provided on the same device. For example, a server device can receive a user's handwriting input from a user device, perform the handwriting recognition, and send the recognition results to the user device in real time.
In exemplary process 700, at a device having memory and one or more processors, the device trains (702) a multi-script handwriting recognition model based on spatially-derived features (for example, features independent of stroke order) of a multi-script training corpus. In some embodiments, the spatially-derived features of the multi-script training corpus are (704) independent of both stroke order and stroke direction. In some embodiments, the training (706) of the multi-script handwriting recognition model is independent of the temporal information associated with the respective strokes in the handwriting samples. Specifically, the images of the handwriting samples are normalized to a predetermined size, and the images include no information on the order in which the individual strokes forming each image were entered. In addition, the images include no information on the direction in which the individual strokes forming each image were entered. In effect, during training, features are extracted from the handwriting images regardless of how each image was formed stroke by stroke over time. Therefore, during recognition, no temporal information associated with the individual strokes is needed, and the recognizer robustly provides consistent recognition results even when the handwriting input contains delays, out-of-order strokes, and arbitrary stroke directions.
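The stroke-order independence described above follows directly from rasterizing strokes into a normalized bitmap before recognition. A minimal sketch of that property, with an assumed grid size and stroke format, is:

```python
# Sketch of the order-independence property: strokes are rasterized into a
# normalized bitmap, so reversing stroke order and stroke direction yields
# exactly the same input image. Grid size and point format are assumptions.
def rasterize(strokes, size=8):
    """strokes: list of strokes, each a list of (x, y) points in [0, 1)."""
    grid = [[0] * size for _ in range(size)]
    for stroke in strokes:
        for x, y in stroke:
            grid[int(y * size)][int(x * size)] = 1
    return grid

stroke_a = [(0.1, 0.1), (0.5, 0.5)]
stroke_b = [(0.9, 0.1), (0.5, 0.5)]
forward = rasterize([stroke_a, stroke_b])
scrambled = rasterize([stroke_b[::-1], stroke_a[::-1]])  # reversed order and direction
print(forward == scrambled)  # True: temporal information has been discarded
```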
In some embodiments, the multi-script training corpus includes handwriting samples corresponding to the characters of at least three non-overlapping scripts. As shown in Fig. 6, the multi-script training corpus includes handwriting samples collected from many users. Each handwriting sample corresponds to one character of a respective script represented in the handwriting recognition model. In order to adequately train the handwriting recognition model, the training corpus includes a large number of writing samples for each character of each script represented in the handwriting recognition model.
In some embodiments, the at least three non-overlapping scripts include (708) Chinese characters, emoji characters, and the Latin script. In some embodiments, the multi-script handwriting recognition model has (710) at least 30,000 output classes, the 30,000 output classes representing 30,000 characters spanning the at least three non-overlapping scripts.
In some embodiments, the multi-script training corpus includes writing samples corresponding to each character of all of the Chinese characters encoded in the Unicode standard (for example, all or most of the CJK (Chinese, Japanese, Korean) Unified Ideographs). The Unicode standard defines a total of about 74,000 CJK Unified Ideographs. The basic block of the CJK Unified Ideographs (4E00-9FFF) includes 20,941 basic Chinese characters used in Chinese, as well as in Japanese, Korean, and Vietnamese. In some embodiments, the multi-script training corpus includes writing samples for all characters in the basic block of the CJK Unified Ideographs. In some embodiments, the multi-script training corpus further includes writing samples for the CJK radicals, which can be used as structural components in writing one or more compound Chinese characters. In some embodiments, the multi-script training corpus further includes writing samples for less frequently used Chinese characters, such as one or more of the ideographs encoded in the CJK Unified Ideographs extension blocks.
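The Unicode ranges cited above are easy to check programmatically. This illustrative snippet tests membership in the CJK Unified Ideographs basic block (U+4E00 through U+9FFF):

```python
# Membership test for the CJK Unified Ideographs basic block (U+4E00..U+9FFF).
def in_cjk_basic_block(ch: str) -> bool:
    return 0x4E00 <= ord(ch) <= 0x9FFF

print(in_cjk_basic_block("我"))  # True: a basic-block ideograph (U+6211)
print(in_cjk_basic_block("A"))   # False: Basic Latin
print(in_cjk_basic_block("α"))   # False: Greek
```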
In some embodiments, the multi-script training corpus further includes writing samples corresponding to each character of all characters in the Latin scripts encoded by the Unicode standard. The characters in the basic Latin script include the uppercase and lowercase Latin letters, and the various basic symbols and digits commonly found on a standard Latin-script keyboard. In some embodiments, the multi-script training corpus further includes the characters in the extended Latin scripts (for example, the various accented forms of the basic Latin letters).
In some embodiments, the multi-script training corpus includes writing samples corresponding to each character of an artificial script that is not associated with any natural human language. For example, in some embodiments, a set of emoji characters is optionally defined in an emoji script, and a writing sample corresponding to each emoji character is included in the multi-script training corpus. For example, a hand-drawn heart shape serves as a handwriting sample for a heart emoji character in the training corpus. Similarly, a hand-drawn smiley face (for example, two dots above an upward-curving arc) serves as a handwriting sample for a smiley emoji character in the training corpus. Other emoji characters include categories of icons showing different emotions (for example, happy, sad, angry, embarrassed, surprised, laughing, crying, frustrated, and the like), different objects and characters (for example, a cat, a dog, a rabbit, a heart, fruit, an eye, a lip, a gift, a flower, a candle, the moon, a star, and the like), and different actions (for example, a handshake, a kiss, running, dancing, jumping, sleeping, eating, dating, loving, liking, voting, and the like). In some embodiments, the strokes in a handwriting sample corresponding to an emoji character are simplified and/or stylized versions of the actual lines that form the corresponding emoji character. In some embodiments, each device or application can use a different design for the same emoji character. For example, the smiley emoji character presented to a female user may differ from the smiley emoji character presented to a male user, even if the handwriting inputs received from the two users are substantially identical.
In some embodiments, the multi-script training corpus further includes writing samples for the characters in other scripts, such as the Greek script (e.g., including Greek letters and symbols), the Cyrillic script, the Hebrew script, and one or more other scripts encoded by the Unicode standard. In some embodiments, the at least three non-overlapping scripts included in the multi-script training corpus include Chinese characters, emoji characters, and characters in the Latin script. The Chinese characters, the emoji characters, and the characters in the Latin script are naturally non-overlapping scripts. Many other scripts may overlap one another in at least some characters. For example, some characters in the Latin script (for example, A, Z) may also be found in many other scripts (such as Greek and Cyrillic). In some embodiments, the multi-script training corpus includes Chinese characters, the Arabic script, and the Latin script. In some embodiments, the multi-script training corpus includes other combinations of overlapping and/or non-overlapping scripts. In some embodiments, the multi-script training corpus includes writing samples for all characters encoded by the Unicode standard.
As shown in Fig. 7, in some embodiments, to train the multi-script handwriting recognition model, the device provides (712) the handwriting samples of the multi-script training corpus to a single convolutional neural network having a single input plane and a single output plane. The device uses the convolutional neural network to determine (714) the spatially-derived features (for example, stroke-order-independent features) of the handwriting samples and the respective weights for the spatially-derived features, for differentiating the characters of the at least three non-overlapping scripts represented in the multi-script training corpus. The multi-script handwriting recognition model differs from conventional multi-script handwriting recognition models in that a single handwriting recognition model having a single input plane and a single output plane is trained using all of the samples in the multi-script training corpus. The single convolutional neural network is trained to differentiate all characters represented in the multi-script training corpus, without relying on individual sub-networks that each handle a small subset of the training corpus (for example, sub-networks each trained for the characters of a particular script or the characters used in a particular language). In addition, the single convolutional neural network is trained to differentiate a large number of characters across multiple non-overlapping scripts, rather than the characters of a few overlapping scripts, such as the Latin script and the Greek script (for example, with the overlapping letters A, B, E, Z, and the like).
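The "single output plane" idea described above amounts to giving characters from all scripts one shared label space, rather than one label space per per-script sub-network. A minimal sketch with illustrative sample characters:

```python
# Sketch of a single shared label space: characters from all scripts map
# into one flat output-class list, so one network output index covers
# Chinese characters, Latin letters, and emoji alike. Characters are
# illustrative samples, not the actual repertoire.
chinese = ["我", "很", "入"]
latin = ["a", "J", "Z"]
emoji = ["☺", "♥"]

classes = chinese + latin + emoji            # the single output plane
index_of = {ch: i for i, ch in enumerate(classes)}

print(len(classes))                  # total output classes across all scripts
print(index_of["J"], index_of["♥"])  # indices within the shared output plane
```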
In some embodiments, the device provides (716) real-time handwriting recognition for a user's handwriting input using the multi-script handwriting recognition model trained on the spatially-derived features of the multi-script training corpus. In some embodiments, providing real-time handwriting recognition for a user's handwriting input includes serially updating the recognition output for the user's handwriting input as the user continues to provide additions and revisions to the handwriting input. In some embodiments, providing real-time handwriting recognition for a user's handwriting input further includes (718) providing the multi-script handwriting recognition model to a user device, where the user device receives handwriting input from the user and performs handwriting recognition on the handwriting input locally based on the multi-script handwriting recognition model.
In some embodiments, the device provides the multi-script handwriting recognition model to multiple devices whose respective input languages have no existing overlap, and the multi-script handwriting recognition model is used on each of the multiple devices to perform handwriting recognition for the different languages associated with each user device. For example, because the multi-script handwriting recognition model is trained to recognize the characters of many different scripts and languages, the same handwriting recognition model can be used worldwide to provide handwriting input for any of those input languages. A first device of a user who only wishes to enter input in English and Hebrew can use the same handwriting recognition model as a second device of another user who only wishes to enter input in Chinese and emoji characters. The user of the first device does not need to separately install an English handwriting input keyboard (for example, one implemented with an English-specific handwriting recognition model) and a separate Hebrew handwriting input keyboard (for example, one implemented with a Hebrew-specific handwriting recognition model); instead, the same universal multi-script handwriting recognition model can be installed once on the first device and used to provide handwriting input functionality for English, for Hebrew, and for mixed input using both languages. Likewise, the second user does not need to install a Chinese handwriting input keyboard (for example, one implemented with a Chinese-specific handwriting recognition model) and a separate emoji handwriting input keyboard (for example, one implemented with an emoji-specific handwriting recognition model); instead, the same universal multi-script handwriting recognition model can be installed once on the second device and used to provide handwriting input functionality for Chinese, for emoji characters, and for mixed input using both scripts. Using the same multi-script handwriting model to handle a large repertoire spanning many scripts (for example, most or all of the characters encoded in close to 100 different scripts) improves the versatility of the recognizer, without imposing a significant burden on either the device vendor or the user.
The multi-script handwriting recognition model trained using the large multi-script training corpus differs from conventional HMM-based handwriting recognition systems in that it does not depend on the temporal information associated with the individual strokes of the characters. In addition, the resource and memory requirements of the multi-script recognition system do not grow linearly with the number of symbols and languages covered by the multi-character recognition system. For example, in a conventional handwriting system, increasing the number of languages means adding another independently trained model, and the memory requirement would at least double to accommodate the enhanced capability of the handwriting recognition system. In contrast, when a multi-language model is trained using the multi-script training corpus, additional handwriting samples are needed to retrain the handwriting recognition model for improved language coverage, and the size of the output plane increases, but the increase is very modest. Suppose the multi-script training corpus includes handwriting samples corresponding to n different languages, and the multi-script handwriting recognition model occupies a memory of size m. When the language coverage is increased to N languages (N > n), the device retrains the multi-script handwriting recognition model based on the spatially-derived features of a second multi-script training corpus, the second multi-script training corpus including second handwriting samples corresponding to the N different languages. If the retrained model occupies a memory of size M, the ratio M/m remains substantially within the range of 1 to 2, while the ratio N/n varies from 1 to 100. Once the multi-script handwriting recognition model has been retrained, the device can provide real-time handwriting recognition for the user's handwriting input using the retrained multi-script handwriting recognition model.
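A worked numeric example of the scaling claim above, using invented illustrative sizes (the language counts and megabyte figures are assumptions, not measurements from this disclosure):

```python
# Illustrative scaling arithmetic: language coverage grows from n to N
# languages, while the model footprint grows only from m to M.
n, N = 3, 100   # language coverage before and after retraining (assumed)
m = 20.0        # model size in MB for n languages (assumed)
M = 35.0        # retrained model size in MB for N languages (assumed)

print(round(N / n, 1))  # coverage grew by roughly 33x
print(M / m)            # memory grew by only 1.75x, within the stated 1-2 range
```

The contrast with a cascade of per-language models is that the cascade's footprint would grow roughly in proportion to N/n, while here only the shared output plane grows.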
Figs. 8A-8B show exemplary user interfaces for providing real-time multi-script handwriting recognition and input on a portable user device (for example, device 100). In Figs. 8A-8B, a handwriting input interface 802 is shown on a touch-sensitive display screen (for example, touch screen 112) of the user device. Handwriting input interface 802 includes a handwriting input area 804, a candidate display area 806, and a text input area 808. In some embodiments, handwriting input interface 802 further includes multiple control elements, each of which can be invoked to cause the handwriting input interface to perform a predetermined function. As shown in Fig. 8A, a delete button, a space button, a carriage return (or Enter) button, and a keyboard switching button are included in the handwriting input interface. Other control elements are possible and are optionally provided in the handwriting input interface to suit each different application that uses handwriting input interface 802. The layout of the different components of handwriting input interface 802 is merely exemplary and may vary across different devices and different applications.
In some embodiments, handwriting input area 804 is a touch-sensitive area for receiving handwriting input from the user. A continuous contact on the touch screen within handwriting input area 804, together with its associated motion path, is registered as a handwritten stroke. In some embodiments, the handwritten strokes registered by the device are visually rendered within handwriting input area 804 at the same locations where the contacts were tracked. As shown in Fig. 8A, the user has provided several handwritten strokes in handwriting input area 804, including some handwritten Chinese characters (for example, "我很"), some handwritten English letters (for example, "Happy"), and a hand-drawn emoji character (for example, a smiley face). The handwritten characters are distributed over multiple rows (for example, two rows) in handwriting input area 804.
In some embodiments, candidate display area 806 displays one or more recognition results (for example, 810 and 812) for the handwriting input currently accumulated in handwriting input area 804. In general, the top-ranked recognition result (for example, 810) is shown in the first position of the candidate display area. As shown in Fig. 8A, because the handwriting recognition model described herein can recognize characters from multiple non-overlapping scripts, including Chinese characters, the Latin script, and emoji characters, the recognition result (for example, 810) provided by the recognition model correctly includes the Chinese characters, English letters, and emoji character represented by the handwriting input. The user does not need to stop midway through writing the input to switch the recognition language.
In some embodiments, text input area 808 is an area that displays the text input being provided to the application that is using the handwriting input interface. As shown in Fig. 8A, text input area 808 is used by a notepad application, and the text currently shown in text input area 808 (for example, "America is very beautiful") is text input provided to the notepad application. In some embodiments, a cursor 813 indicates the current text input position in text input area 808.
In some embodiments, the user can select a particular recognition result shown in candidate display area 806, for example, by an explicit selection input (for example, a tap gesture on one of the displayed recognition results) or an implicit confirmation input (for example, a tap gesture on the "carriage return" button, or a double-tap gesture in the handwriting input area). As shown in Fig. 8B, the user has explicitly selected the top-ranked recognition result 810 using a tap gesture (as indicated by contact 814 over recognition result 810 in Fig. 8A). In response to the selection input, the text of recognition result 810 is inserted at the insertion point indicated by cursor 813 in text input area 808. As shown in Fig. 8B, once the text of the selected recognition result 810 has been entered into text input area 808, handwriting input area 804 and candidate display area 806 are cleared. Handwriting input area 804 is now ready to receive new handwriting input, and candidate display area 806 can now be used to display recognition results for the new handwriting input. In some embodiments, an implicit confirmation input causes the top-ranked recognition result to be entered into text input area 808, without requiring the user to stop and select the top-ranked recognition result. Well-designed implicit confirmation inputs improve the text entry rate and reduce the cognitive load imposed on the user during text composition.
In some embodiments (not shown in Figs. 8A-8B), the top-ranked recognition result for the current handwriting input is optionally displayed tentatively in text input area 808. For example, the tentative text input shown in text input area 808 is visually distinguished from other text input in the text input area by a tentative input box surrounding the tentative text input. The text shown in the tentative input box has not yet been committed or provided to the associated application (for example, the notepad application), and is automatically updated by the handwriting input module when the top-ranked recognition result changes, for example, in response to the user revising the current handwriting input.
Figs. 9A-9B are a flow chart of an exemplary process 900 for providing multi-script handwriting recognition on a user device. In some embodiments, as shown in process 900, the user device receives (902) a multi-script handwriting recognition model, the multi-script recognition model having been trained on the spatially-derived features (for example, features independent of stroke order and stroke direction) of a multi-script training corpus, the multi-script training corpus including handwriting samples corresponding to the characters of at least three non-overlapping scripts. In some embodiments, the multi-script handwriting recognition model is (906) a single convolutional neural network having a single input plane and a single output plane, and includes the spatially-derived features and the respective weights for the spatially-derived features, for differentiating the characters of the at least three non-overlapping scripts represented in the multi-script training corpus. In some embodiments, the multi-script handwriting recognition model is configured to recognize (908) characters from the respective input images of the one or more recognition units identified in the handwriting input, and the respective spatially-derived features used for the recognition are independent of the respective stroke order, stroke direction, and stroke continuity in the handwriting input.
In some embodiments, the user device receives (908) handwriting input from a user, the handwriting input including one or more handwritten strokes provided on a touch-sensitive surface coupled to the user device. For example, the handwriting input includes data corresponding to the locations and movements of contacts between a finger or stylus and the touch-sensitive surface coupled to the user device. In response to receiving the handwriting input, the user device provides (910) one or more handwriting recognition results to the user in real time, based on (912) the multi-script handwriting recognition model trained on the spatially-derived features of the multi-script training corpus.
In some embodiments, when providing real-time handwriting recognition results to the user, the user device segments (914) the user's handwriting input into one or more recognition units, each recognition unit including one or more of the handwritten strokes provided by the user. In some embodiments, the user device segments the user's handwriting input according to the shapes, locations, and sizes of the individual strokes formed by the contacts between the user's finger or stylus and the touch-sensitive surface of the user device. In some embodiments, segmenting the handwriting input also takes into account the relative order and relative positions of the individual strokes formed by the contacts between the user's finger or stylus and the touch-sensitive surface of the user device. In some embodiments, the user's handwriting input is in a cursive writing style, and a single continuous stroke in the handwriting input may correspond to multiple strokes of the printed form of a recognized character. In some embodiments, the user's handwriting input may include a continuous stroke that spans multiple recognized characters in printed form. In some embodiments, segmenting the handwriting input produces one or more input images, each input image corresponding to a respective recognition unit. In some embodiments, some of the input images optionally include some overlapping pixels. In some embodiments, the input images do not include any overlapping pixels. In some embodiments, the user device generates a segmentation lattice, each segmentation chain of the segmentation lattice representing a respective way of segmenting the current handwriting input. In some embodiments, each arc in a segmentation chain corresponds to a respective group of strokes in the current handwriting input.
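The segmentation lattice described above can be sketched as an enumeration problem: each arc groups a consecutive run of strokes into one candidate recognition unit, and each chain through the lattice is one complete way of segmenting the input. The cap on strokes per unit and the consecutive-run simplification are assumptions for the sketch:

```python
# Minimal segmentation-lattice sketch: enumerate every way to split strokes
# 0..num_strokes-1 into consecutive groups (arcs), each group forming one
# candidate recognition unit.
def segmentation_paths(num_strokes, max_per_unit=3):
    if num_strokes == 0:
        return [[]]
    paths = []
    for size in range(1, min(max_per_unit, num_strokes) + 1):
        head = tuple(range(size))  # one arc covering strokes 0..size-1
        for rest in segmentation_paths(num_strokes - size, max_per_unit):
            # Shift the remaining arcs past the strokes consumed by `head`.
            paths.append([head] + [tuple(s + size for s in arc) for arc in rest])
    return paths

for path in segmentation_paths(3):
    print(path)  # each line is one segmentation chain through the lattice
```

For three strokes this yields four chains, from "three one-stroke units" down to "one three-stroke unit"; a real system would score each chain's arcs with the recognizer and keep the best-scoring chains.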
As shown in process 900, the user device provides (914) the respective image of each of the one or more recognition units as input to the multi-script recognition model. For at least one recognition unit of the one or more recognition units, the user device obtains (916) from the multi-script handwriting recognition model at least a first output character from a first script and at least a second output character from a second script different from the first script. For example, the same input image may cause the multi-script recognition model to output two or more similar-looking output characters from different scripts as recognition results for that same input image. For example, the handwriting inputs for the letter "a" in the Latin script and the character "α" in the Greek script are usually similar. Likewise, the handwriting inputs for the letter "J" in the Latin script and the Chinese character "丁" are usually similar. Similarly, the handwriting input for an emoji character may resemble the handwriting input for the CJK radical "西". In some embodiments, the multi-script handwriting recognition model generally produces multiple candidate recognition results that may each correspond to the user's handwriting input, because the visual appearance of the handwriting input can be difficult to interpret even for a human reader. In some embodiments, the first script is the CJK basic character block and the second script is the Latin script as encoded by the Unicode standard. In some embodiments, the first script is the CJK basic character block and the second script is a set of emoji characters. In some embodiments, the first script is the Latin script and the second script is the set of emoji characters.
In some embodiments, the user device displays (918) both the first output character and the second output character in the candidate display area of the handwriting input interface of the user device. In some embodiments, the user device selectively displays (920) one of the first output character and the second output character, based on which of the first script and the second script corresponds to a soft keyboard currently installed on the user device. For example, suppose the handwriting recognition model has identified both the Chinese character "入" and the Greek letter "λ" as output characters for the current handwriting input. The user device determines whether a Chinese soft keyboard (e.g., a keyboard using the Pinyin input method) or a Greek input keyboard is installed on the user device. If the user device determines that only a Chinese soft keyboard is installed, the user device optionally displays only the Chinese character "入", and not the Greek letter "λ", as a recognition result to the user.
In some embodiments, the user device provides real-time handwriting recognition and input. In some embodiments, before the user makes an explicit or implicit selection of a recognition result displayed to the user, the user device continually updates (922) the one or more recognition results for the user's handwriting input in response to the user continuing to add to or revise the handwriting input. In some embodiments, in response to each revision of the one or more recognition results, the user device displays (924) the correspondingly revised one or more recognition results to the user in the candidate display area of the handwriting input user interface.
In some embodiments, the multi-script handwriting recognition model is trained (926) to recognize all characters of at least three non-overlapping scripts, the at least three non-overlapping scripts including Chinese characters, emoji characters, and the Latin script as encoded according to the Unicode standard. In some embodiments, the at least three non-overlapping scripts include Chinese characters, Arabic script, and the Latin script. In some embodiments, the multi-script handwriting recognition model has (928) at least thirty thousand output classes, the at least thirty thousand output classes representing at least thirty thousand characters spanning the at least three non-overlapping scripts.
In some embodiments, the user device allows the user to enter multi-character handwriting input, such as a phrase that includes characters from more than one script. For example, the user can write continuously and receive a handwriting recognition result that includes characters from more than one script, without stopping midway through the writing to manually switch the recognition language. For example, the user can write the multi-script sentence "Hello means 你好 in Chinese." in the handwriting input area of the user device, without switching the input language from English to Chinese before writing the Chinese characters "你好", and without switching the input language from Chinese back to English when writing the English words "in Chinese".
As described herein, the multi-script handwriting recognition model is used to provide real-time handwriting recognition for the user's input. In some embodiments, the real-time handwriting recognition is used to provide real-time multi-character handwriting input functionality on the user's device. FIGS. 10A-10C are flow charts of an exemplary process 1000 for providing real-time handwriting recognition and input on a user device. Specifically, the real-time handwriting recognition is stroke-order independent at the character level, the phrase level, and the sentence level.

In some embodiments, stroke-order independent handwriting recognition at the character level requires that the handwriting recognition model provide the same recognition result for a particular handwritten character regardless of the order in which the individual strokes of the character are provided by the user. For example, the strokes of a Chinese character are conventionally written in a particular order. Although native Chinese speakers are typically trained in school to write each Chinese character in a particular order, many users later adopt personalized styles and stroke orders that deviate from the conventional order. In addition, cursive writing styles are highly personalized, and multiple strokes of the printed form of a Chinese character are often merged into a single twisting, curving stylized stroke, which sometimes even connects to the next character. Because the recognition model is trained on images of writing samples that carry no temporal information about the individual strokes, recognition is independent of stroke-order information. For example, for the Chinese character "十", the handwriting recognition model will provide the same recognition result "十" regardless of whether the user writes the horizontal stroke or the vertical stroke first.
As shown in FIG. 10A, in process 1000, the user device receives (1002) a plurality of handwritten strokes from the user, the plurality of handwritten strokes corresponding to a handwritten character. For example, a handwriting input for the character "十" typically includes a substantially horizontal handwritten stroke intersecting a substantially vertical handwritten stroke.

In some embodiments, the user device generates (1004) an input image based on the plurality of handwritten strokes. In some embodiments, the user device provides (1006) the input image to the handwriting recognition model to perform real-time recognition of the handwritten character, where the handwriting recognition model provides stroke-order independent handwriting recognition. Then, while receiving the plurality of handwritten strokes, the user device displays (1008) in real time the same first output character (e.g., the printed form of the character "十"), regardless of the respective order in which the plurality of handwritten strokes (e.g., the horizontal stroke and the vertical stroke) are received from the user.
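The stroke-order independence follows directly from rendering the strokes into an image before recognition. A minimal sketch (the grid size and sampling density are arbitrary illustrative choices, and a real model would consume a grayscale bitmap rather than a cell set):

```python
def rasterize(strokes, size=16):
    """Render strokes (lists of (x, y) points in [0, 1)) as the set of lit
    cells on a size x size grid.  Only where the ink lands matters, not
    when: temporal information is discarded by construction."""
    cells = set()
    for stroke in strokes:
        for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
            steps = size * 2  # sample each segment densely
            for i in range(steps + 1):
                t = i / steps
                cells.add((int((x0 + (x1 - x0) * t) * size),
                           int((y0 + (y1 - y0) * t) * size)))
    return cells

# "十": writing the vertical stroke first, or drawing the horizontal
# stroke right-to-left, produces the identical image, so an image-based
# recognizer necessarily gives the same answer.
horizontal = [(0.1, 0.5), (0.9, 0.5)]
vertical = [(0.5, 0.1), (0.5, 0.9)]
assert rasterize([horizontal, vertical]) == rasterize([vertical, horizontal])
assert rasterize([horizontal[::-1], vertical]) == rasterize([horizontal, vertical])
```

The second assertion also illustrates the stroke-direction independence discussed below: reversing the direction of a stroke leaves its rendered image unchanged.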
Some conventional handwriting recognition systems permit small stroke-order variations in a small number of characters by specifically including such variations when training the recognition system. Such conventional systems cannot be scaled to accommodate arbitrary stroke-order variations in a large number of complex characters, such as Chinese characters, because even characters of moderate complexity admit a large number of significant stroke-order variations. Moreover, merely including more permutations of acceptable stroke orders for particular characters still does not enable a conventional system to handle handwriting input that merges multiple strokes into a single stroke (e.g., when writing in a highly cursive style) or that divides one stroke into multiple sub-strokes (e.g., when a character is captured using very coarse sampling of the input strokes). Therefore, deriving spatial features and training a multi-script handwriting system as described herein is advantageous over conventional recognition systems.
In some embodiments, the stroke-order independent handwriting recognition is performed independently of the temporal information associated with the individual strokes of each handwritten character. In some embodiments, the stroke-order independent handwriting recognition is performed in conjunction with stroke-distribution information, which takes into account the spatial distribution of the individual strokes before they are merged into a flat input image. More details on how temporally derived stroke-distribution information may be used to augment the stroke-order independent handwriting recognition described above are provided later in this specification (e.g., with respect to FIGS. 25A-27). The techniques described with respect to FIGS. 25A-27 do not destroy the stroke-order independence of the handwriting recognition system.
In some embodiments, the handwriting recognition model provides (1010) stroke-direction independent handwriting recognition. In some embodiments, stroke-direction independent recognition requires that the user device display the same first output character in response to receiving the plurality of handwritten strokes, regardless of the respective stroke direction of each of the plurality of handwritten strokes provided by the user. For example, if the user writes the Chinese character "十" in the handwriting input area of the user device, the handwriting recognition model will output the same recognition result regardless of whether the user writes the horizontal stroke from left to right or from right to left. Similarly, the handwriting recognition model will output the same recognition result regardless of whether the user writes the vertical stroke in a top-to-bottom or a bottom-to-top direction. In another example, many Chinese characters are structurally composed of two or more radicals. Some Chinese characters each include a left radical and a right radical, and people conventionally write the left radical first and then the right radical. In some embodiments, regardless of whether the user writes the right radical or the left radical first, the handwriting recognition model will provide the same recognition result, as long as the resulting handwriting input shows the left radical to the left of the right radical when the user completes the handwritten character. Similarly, some Chinese characters each include a top radical and a bottom radical, and people conventionally write the top radical first and then the bottom radical. In some embodiments, regardless of whether the user writes the top radical or the bottom radical first, the handwriting recognition model will provide the same recognition result, as long as the resulting handwriting input shows the top radical above the bottom radical. In other words, the handwriting recognition model does not rely on the direction in which the user provides the individual strokes of a handwritten character to determine the identity of the handwritten character.
In some embodiments, the handwriting recognition model provides handwriting recognition based on the image of a recognition unit, regardless of how many sub-strokes the user used to provide the recognition unit. In other words, in some embodiments, the handwriting recognition model provides (1014) stroke-count independent handwriting recognition. In some embodiments, the user device displays the same first output character in response to receiving the plurality of handwritten strokes, regardless of how many handwritten strokes were used to form the continuous strokes in the input image. For example, if the user writes the Chinese character "十" in the handwriting input area, the handwriting recognition model will output the same recognition result regardless of whether the user provided four strokes (e.g., two short horizontal strokes and two short vertical strokes forming the cross shape), or two strokes (e.g., an L-shaped stroke and a 7-shaped stroke, or a horizontal stroke and a vertical stroke), or any other number of strokes (e.g., hundreds of very short strokes or dots) forming the shape of the character "十".
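Stroke-count independence falls out of the same image-based approach: splitting a stroke into sub-strokes leaves the rendered ink unchanged. A sketch under the same illustrative grid assumptions as before (cell set on an arbitrary 16x16 grid):

```python
def ink_cells(strokes, size=16):
    """Set of grid cells covered by ink; strokes are lists of (x, y)
    points in [0, 1).  The cell set depends only on the ink's shape."""
    cells = set()
    for stroke in strokes:
        for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
            steps = size * 2
            for i in range(steps + 1):
                t = i / steps
                cells.add((int((x0 + (x1 - x0) * t) * size),
                           int((y0 + (y1 - y0) * t) * size)))
    return cells

# "十" drawn with two full strokes versus four half-length strokes: the
# ink covers the same cells, so the image, and hence the recognition
# result, is unchanged.
two_strokes = [[(0.1, 0.5), (0.9, 0.5)], [(0.5, 0.1), (0.5, 0.9)]]
four_strokes = [[(0.1, 0.5), (0.5, 0.5)], [(0.5, 0.5), (0.9, 0.5)],
                [(0.5, 0.1), (0.5, 0.5)], [(0.5, 0.5), (0.5, 0.9)]]
assert ink_cells(two_strokes) == ink_cells(four_strokes)
```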
In some embodiments, the handwriting recognition model can not only recognize an individual character regardless of the stroke order, stroke direction, and stroke count used to write it; the handwriting recognition model can also recognize multiple characters regardless of the temporal order in which the user provided the strokes of the multiple characters.
In some embodiments, the user device receives not only the first plurality of handwritten strokes but also (1016) a second plurality of handwritten strokes from the user, where the second plurality of handwritten strokes corresponds to a second handwritten character. In some embodiments, the user device generates (1018) a second input image based on the second plurality of handwritten strokes. In some embodiments, the user device provides (1020) the second input image to the handwriting recognition model to perform real-time recognition of the second handwritten character. In some embodiments, while receiving the second plurality of handwritten strokes, the user device displays (1022) in real time a second output character corresponding to the second plurality of handwritten strokes. In some embodiments, the second output character and the first output character are displayed simultaneously in a spatial sequence that is independent of the respective order in which the first plurality of handwritten strokes and the second plurality of handwritten strokes were provided by the user. For example, if the user writes two Chinese characters (e.g., "十" and "八") in the handwriting input area of the user device, the user device will display the recognition result "十八" regardless of whether the user writes the strokes of the character "十" or the strokes of the character "八" first, as long as the handwriting input currently accumulated in the handwriting input area shows the strokes of the character "十" to the left of the strokes of the character "八". Indeed, even if the user writes some strokes of the character "八" (e.g., the left curved stroke) before some strokes of the character "十" (e.g., the vertical stroke), the user device will display the recognition result "十八" in the spatial order of the two handwritten characters, as long as the resulting image of the handwriting input shows all strokes of the character "十" to the left of all strokes of the character "八" in the handwriting input area.
In other words, as shown in FIG. 10B, in some embodiments, the spatial sequence of the first output character and the second output character corresponds to (1024) the spatial distribution of the first plurality of handwritten strokes and the second plurality of handwritten strokes along a default writing direction (e.g., left to right) of the handwriting input interface of the user device. In some embodiments, the second plurality of handwritten strokes is received (1026) temporally after the first plurality of handwritten strokes, and yet the second output character precedes the first output character in the spatial sequence along the default writing direction (e.g., left to right) of the handwriting input interface of the user device.
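Ordering the output by ink position rather than by entry time can be sketched as follows; the bounding-box representation and the sort key are illustrative assumptions.

```python
def spatial_result(units, direction="ltr"):
    """units: list of (output_char, (x_min, y_min, x_max, y_max)) pairs in
    the order the strokes arrived.  The result string follows the spatial
    layout along the default writing direction, not the arrival order."""
    ordered = sorted(units, key=lambda u: u[1][0],
                     reverse=(direction == "rtl"))
    return "".join(ch for ch, _ in ordered)

# "八" was written first but placed to the right of the later "十",
# so the displayed result still reads "十八".
written_order = [("八", (60, 0, 100, 40)), ("十", (0, 0, 40, 40))]
print(spatial_result(written_order))  # 十八
```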
In some embodiments, the handwriting recognition model extends the stroke-order independent recognition to the sentence level. For example, even if the handwritten character "十" is in a first handwritten sentence and the handwritten character "八" is in a second handwritten sentence, with the two handwritten characters separated by one or more other handwritten characters and/or words in the handwriting input area, the handwriting recognition model will still provide a recognition result in which the two characters appear in their spatial sequence. Regardless of the temporal order of the strokes of the two characters provided by the user, the spatial order of the two recognized characters in the recognition result remains the same when the user completes the handwriting input, provided that the recognition units of the two characters are spatially arranged with "十" before "八". In some embodiments, the first handwritten character (e.g., "十") is provided by the user as part of a first handwritten sentence (e.g., "十 is a number."), the second handwritten character (e.g., "八") is provided by the user as part of a second handwritten sentence (e.g., "八 is another number."), and the first handwritten sentence and the second handwritten sentence are displayed simultaneously in the handwriting input area of the user device. In some embodiments, when the user confirms that the recognition result (e.g., "十 is a number. 八 is another number.") is the correct recognition result, the two sentences are entered into the text input area of the user device, and the handwriting input area is cleared for the user to enter another handwriting input.
In some embodiments, because the handwriting recognition model is stroke-order independent not only at the character level but also at the phrase level and the sentence level, the user can make corrections to a previously incomplete character after having written subsequent characters. For example, if the user forgets to write a particular stroke of some character before proceeding to write one or more subsequent characters in the handwriting input area, the user can still write the missing stroke later at the correct location within the particular character and thereby receive the correct recognition result.
In a conventional stroke-order dependent recognition system (e.g., an HMM-based recognition system), once a character has been written, it is committed, and the user can no longer make any changes to it. If the user wishes to make any change, the user must delete the character and all subsequent characters and start over. In some conventional recognition systems, the user is required to complete a handwritten character within a short predetermined time window, and any stroke entered outside the predetermined time window is not included in the same recognition unit as the strokes provided during the time window. Such conventional systems are difficult to use and cause the user much frustration. A system that is independent of stroke order does not have these disadvantages: the user can complete a character in whatever order, and over whatever time span, suits the user. The user can also write one or more characters in succession in the handwriting input interface and then make corrections to a character written earlier (e.g., add one or more strokes to it). In some embodiments, the user can also individually delete a character written earlier (e.g., using the methods described later with respect to FIGS. 21A-22B) and write over the same position in the handwriting input interface.
As shown in FIGS. 10B-10C, the second plurality of handwritten strokes is spatially positioned after the first plurality of handwritten strokes along the default writing direction of the handwriting input interface of the user device (1028), and the second output character follows the first output character, in the spatial sequence along the default writing direction, in the candidate display area of the handwriting input interface. The user device receives (1030) a third handwritten stroke from the user to revise the first handwritten character (i.e., the handwritten character formed by the first plurality of handwritten strokes), the third handwritten stroke being received temporally after the first plurality of handwritten strokes and the second plurality of handwritten strokes. For example, suppose the user has written two characters in a left-to-right spatial sequence in the handwriting input area. The first plurality of strokes forms the handwritten character "八". Note that the user intended to write the character "个" but omitted one stroke. The second plurality of strokes forms the handwritten character "体". When the user later realizes that he wished to write "个体" rather than "八体", the user can simply add a vertical stroke below the strokes of the character "八", and the user device assigns the vertical stroke to the first recognition unit (e.g., the recognition unit for "八"). The user device will output a new output character (e.g., "个") for the first recognition unit, and the new output character will replace the previous output character (e.g., "八") in the recognition result. As shown in FIG. 10C, in response to receiving the third handwritten stroke, the user device assigns (1032) the third handwritten stroke to the same recognition unit as the first plurality of handwritten strokes, based on the relative proximity of the third handwritten stroke to the first plurality of handwritten strokes. In some embodiments, the user device generates (1034) a revised input image based on the first plurality of handwritten strokes and the third handwritten stroke. The user device provides (1036) the revised input image to the handwriting recognition model to perform real-time recognition of the revised handwritten character. In some embodiments, in response to receiving the third handwritten stroke, the user device displays (1040) a third output character corresponding to the revised input image, where the third output character replaces the first output character and is displayed simultaneously with the second output character in the spatial sequence along the default writing direction.
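The proximity-based assignment of a late-arriving stroke might be sketched as follows; measuring proximity by centroid distance is one plausible choice and an assumption of this sketch, not a detail disclosed above.

```python
def assign_stroke(stroke, units):
    """Assign a late-arriving stroke to the spatially closest recognition
    unit (closest centroid, one plausible proximity measure).
    stroke: list of (x, y) points; units: dict name -> list of strokes."""
    def centroid(points):
        return (sum(x for x, _ in points) / len(points),
                sum(y for _, y in points) / len(points))
    sx, sy = centroid(stroke)
    def distance(name):
        pts = [p for s in units[name] for p in s]
        ux, uy = centroid(pts)
        return (ux - sx) ** 2 + (uy - sy) ** 2
    best = min(units, key=distance)
    units[best].append(stroke)
    return best

# The vertical stroke added beneath the "八"-shaped first unit is assigned
# to that unit (revising it toward "个"); the unit drawn further to the
# right is untouched.
units = {"first": [[(10, 10), (20, 30)], [(30, 10), (20, 30)]],
         "second": [[(60, 10), (60, 40)]]}
print(assign_stroke([(20, 30), (20, 50)], units))  # first
```

After the assignment, only the affected unit needs to be re-rasterized and re-recognized; the other recognition units and their output characters are left as they are.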
In some embodiments, the handwriting input module recognizes handwriting input written along a default left-to-right writing direction. For example, the user may write characters from left to right in one or more lines. In response to the handwriting input, the handwriting input module presents recognition results that include the characters in a left-to-right spatial sequence, in one or more lines as needed. If the user selects a recognition result, the selected recognition result is entered into the text input area of the user device. In some embodiments, the default writing direction is top to bottom. In some embodiments, the default writing direction is right to left. In some embodiments, the user can optionally change the default writing direction to an alternative writing direction after selecting a recognition result and clearing the handwriting input area.
In some embodiments, the handwriting input module allows the user to enter a multi-character handwriting input in the handwriting input area, and allows strokes to be deleted from one recognition unit at a time, rather than from all recognition units at once. In some embodiments, the handwriting input module allows one stroke at a time to be deleted from the handwriting input. In some embodiments, deletion proceeds recognition unit by recognition unit in the direction opposite the default writing direction, regardless of the order in which the recognition units or strokes were entered to produce the current handwriting input. In some embodiments, strokes are deleted one by one in the reverse of the order in which they were entered within each recognition unit, and when all strokes in one recognition unit have been deleted, deletion proceeds to the strokes of the next recognition unit along the direction opposite the default writing direction.
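One plausible sketch of the per-unit, reverse-order deletion just described (the data layout, a list of stroke lists ordered along the writing direction, is an assumption of the sketch):

```python
def delete_one_stroke(units):
    """units: list of recognition units ordered along the default writing
    direction; each unit is a list of strokes in entry order.  A deletion
    removes the last-entered stroke of the last unit; once a unit is
    empty, the next deletion moves to the unit before it."""
    while units and not units[-1]:
        units.pop()
    if units:
        units[-1].pop()
        if not units[-1]:
            units.pop()
    return units

# Two two-stroke units: deletions consume the rightmost unit stroke by
# stroke before touching the unit to its left.
units = [["h1", "v1"], ["h2", "v2"]]
delete_one_stroke(units)
print(units)  # [['h1', 'v1'], ['h2']]
```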
In some embodiments, while the third output character and the second output character are displayed simultaneously as a candidate recognition result in the candidate display area of the handwriting input interface, the user device receives a deletion input from the user. In response to the deletion input, the user device deletes the second output character from the recognition result while maintaining the third output character in the recognition result displayed in the candidate display area.
In some embodiments, as shown in FIG. 10C, the user device renders (1042) in real time the first plurality of handwritten strokes, the second plurality of handwritten strokes, and the third handwritten stroke as the user provides each handwritten stroke. In some embodiments, in response to receiving the deletion input from the user, the user device deletes (1044) from the handwriting input area the respective rendering of the second plurality of handwritten strokes (e.g., corresponding to the second handwritten character), while maintaining in the handwriting input area the respective renderings of the first plurality of handwritten strokes and the third handwritten stroke (e.g., jointly corresponding to the revised first handwritten character). For example, after the user has provided the missing vertical stroke in the character string "个体", if the user enters a deletion input, the strokes in the recognition unit for the character "体" are removed from the handwriting input area, and the character "体" is removed from the recognition result "个体" in the candidate display area of the user device. After the deletion, the strokes for the character "个" remain in the handwriting input area, and the recognition result shows only the character "个".
In some embodiments, the handwritten character is a multi-stroke Chinese character. In some embodiments, the first plurality of handwritten strokes is provided in a cursive writing style. In some embodiments, the first plurality of handwritten strokes is provided in a cursive writing style, and the handwritten character is a multi-stroke Chinese character. In some embodiments, the handwritten character is Arabic script written in a cursive style. In some embodiments, the handwritten character belongs to another script written in a cursive style.
In some embodiments, the user device establishes a respective predetermined constraint (e.g., a set of acceptable sizes) for entering a handwritten character, and segments the plurality of currently accumulated handwritten strokes into a plurality of recognition units based on the respective predetermined constraint, where a respective input image is generated from each recognition unit, provided to the handwriting recognition model, and recognized as a respective output character.
In some embodiments, the user device receives an additional handwritten stroke from the user after segmenting the plurality of currently accumulated handwritten strokes. The user device assigns the additional handwritten stroke to a respective one of the plurality of recognition units based on the spatial position of the additional handwritten stroke relative to the plurality of recognition units.
Attention is now directed to exemplary user interfaces for providing handwriting recognition and input on a user device. In some embodiments, the exemplary user interfaces are provided on a user device based on a multi-script handwriting recognition model that provides real-time, stroke-order independent handwriting recognition of the user's handwriting input. In some embodiments, the exemplary user interfaces are user interfaces of the exemplary handwriting input interface 802 (e.g., as shown in FIGS. 8A and 8B), which includes a handwriting input area 804, a candidate display area 806, and a text input area 808. In some embodiments, the exemplary handwriting input interface 802 further includes a plurality of control elements 1102, such as a delete button, a space bar, a carriage return button, a keyboard switching button, and so on. One or more other areas and/or elements may be provided in the handwriting input interface 802 to enable the additional functions described below.
As described herein, the multi-script handwriting recognition model can have a very large vocabulary of tens of thousands of characters from many different scripts and languages. As a result, for a given handwriting input, the recognition model is very likely to identify a large number of output characters, each of which has a considerable likelihood of being the character the user wishes to enter. On a user device with a limited display area, it is advantageous to initially provide only a subset of the recognition results, while keeping the other results available upon the user's request.

FIGS. 11A-11G show exemplary user interfaces for displaying a subset of the recognition results in a normal view of the candidate display area, together with an affordance for invoking an extended view of the candidate display area that displays the rest of the recognition results. In addition, in the extended view of the candidate display area, the recognition results are divided into different categories and displayed on different tabbed pages of the extended view.
FIG. 11A shows the exemplary handwriting input interface 802. The handwriting input interface includes a handwriting input area 804, a candidate display area 806, and a text input area 808. One or more control elements 1102 are also included in the handwriting input interface 802.

As shown in FIG. 11A, the candidate display area 806 optionally includes an area for displaying one or more recognition results and an affordance 1104 (e.g., an expand icon) for invoking the extended version of the candidate display area 806.
FIGS. 11A-11C show that, as the user provides one or more handwritten strokes (e.g., strokes 1106, 1108, and 1110) in the handwriting input area 804, the user device identifies and displays a respective set of recognition results corresponding to the strokes currently accumulated in the handwriting input area 804. As shown in FIG. 11B, after the user enters the first stroke 1106, the user device identifies and displays three recognition results 1112, 1114, and 1116 (e.g., the characters "/", "1", and ","). In some embodiments, a small number of candidate characters are displayed in the candidate display area 806, ordered according to the recognition confidence associated with each character.
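Confidence-ranked truncation of the candidate list can be sketched as follows; the cutoff of three visible candidates mirrors the example above but is otherwise an illustrative choice.

```python
def split_candidates(scored, k=3):
    """scored: list of (character, confidence) pairs from the recognizer.
    The normal candidate view shows the top k by confidence; the rest are
    reserved for the extended view behind the expand affordance."""
    ranked = sorted(scored, key=lambda c: c[1], reverse=True)
    return [ch for ch, _ in ranked[:k]], [ch for ch, _ in ranked[k:]]

shown, hidden = split_candidates(
    [(",", 0.15), ("/", 0.41), ("1", 0.33), ("(", 0.07), ("|", 0.04)])
print(shown)   # ['/', '1', ',']
print(hidden)  # ['(', '|']
```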
In some embodiments, the top-ranked candidate result (e.g., "/") is tentatively displayed in the text input area 808, for example in a box 1118. The user can optionally confirm that the top-ranked candidate is the desired input with a simple confirmation input (e.g., pressing the "Enter" key, or providing a double-tap gesture in the handwriting input area).
FIG. 11C shows that, before the user has selected any candidate recognition result, when the user enters two more strokes 1108 and 1110 in the handwriting input area 804, the added strokes are rendered in the handwriting input area 804 together with the initial stroke 1106, and the candidate results are updated to reflect the change in the recognition units identified from the currently accumulated handwriting input. As shown in FIG. 11C, based on these three strokes, the user device has identified a single recognition unit. Based on the single recognition unit, the user device has identified and displayed several recognition results 1118-1124. In some embodiments, one or more of the recognition results (e.g., 1118 and 1122) currently shown in the candidate display area 806 each represent a candidate character selected from multiple similar-looking candidate characters for the current handwriting input.
As shown in Figures 11C-11D, when the user selects the affordance 1104 (for example, with a tap gesture by contact 1126 over the affordance 1104), the candidate display region changes from a normal view (for example, as shown in Figure 11C) to an extended view (for example, as shown in Figure 11D). In some embodiments, the extended view shows all of the recognition results (for example, candidate characters) that have been identified for the current handwriting input.
In some embodiments, the initially displayed normal view of the candidate display region 806 shows only the most commonly used characters of the respective script or language, while the extended view shows all candidate characters, including rarely used characters of the script or language. The extended view of the candidate display region can be designed in different ways. Figures 11D-11G show exemplary designs of the extended candidate display region according to some embodiments.
As shown in Figure 11D, in some embodiments, the extended candidate display region 1128 includes one or more tabbed pages (for example, pages 1130, 1132, 1134, and 1136) that each present candidate characters of a respective category. The tab design shown in Figure 11D allows the user to quickly locate the desired category of characters, and then find the character he or she wishes to enter within the corresponding tabbed page.
In Figure 11D, the first tabbed page 1130 shows all candidate characters, including both commonly used and rarely used characters, that have been identified for the currently accumulated handwriting input. As shown in Figure 11D, tabbed page 1130 includes all of the characters shown in the initial candidate display region 806 in Figure 11C, as well as several additional characters not included in the initial candidate display region 806 (for example, "β", "巾", and so on).
In some embodiments, the characters shown in the initial candidate display region 806 include only characters from a set of commonly used characters associated with a script (for example, all characters in the basic block of CJK script encoded according to the Unicode standard). In some embodiments, the characters shown in the extended candidate display region 1128 further include a set of rarely used characters associated with the script (for example, all characters in the extension blocks of CJK script encoded according to the Unicode standard). In some embodiments, the extended candidate display region 1128 further includes candidate characters from other scripts rarely used by the user, such as Greek script, Arabic script, and/or emoji script.
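The split between commonly used and rarely used characters described above can be sketched by checking a candidate character's Unicode block. The block ranges below follow the Unicode standard; treating the basic CJK block as "common" and the extension blocks as "rare" is one illustrative instance of the predetermined categorization criterion, not the only possible one.

```python
# Sketch: categorize candidate characters by Unicode block. The ranges
# are from the Unicode standard; the common/rare policy is an assumption.

CJK_BASIC = (0x4E00, 0x9FFF)    # CJK Unified Ideographs (basic block)
CJK_EXT_A = (0x3400, 0x4DBF)    # Extension A
CJK_EXT_B = (0x20000, 0x2A6DF)  # Extension B

def categorize(ch: str) -> str:
    cp = ord(ch)
    if CJK_BASIC[0] <= cp <= CJK_BASIC[1]:
        return "common"
    if CJK_EXT_A[0] <= cp <= CJK_EXT_A[1] or CJK_EXT_B[0] <= cp <= CJK_EXT_B[1]:
        return "rare"
    return "other"  # e.g. Latin, Greek, or emoji candidates

def split_candidates(candidates):
    """Divide recognizer output between the initial view (common
    characters only) and the extended view (all candidates),
    mirroring regions 806 and 1128."""
    initial = [c for c in candidates if categorize(c) == "common"]
    extended = list(candidates)  # the extended view shows everything
    return initial, extended
```

A usage note: with candidates ["中", "㐀", "a"], only "中" would appear in the initial view, while the extended view would retain all three.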
In some embodiments, as shown in Figure 11D, the extended candidate display region 1128 includes respective tabbed pages 1130, 1132, 1134, and 1138, each corresponding to candidate characters of a respective category (for example, all characters, rare characters, characters from Latin script, and characters from emoji script). Figures 11E-11G show that the user may select each of the different tabbed pages to reveal the candidate characters of the corresponding category. Figure 11E shows only the rare characters corresponding to the current handwriting input (for example, characters from the extension blocks of CJK script). Figure 11F shows only the Latin or Greek letters corresponding to the current handwriting input. Figure 11G shows only the emoji characters corresponding to the current handwriting input.
In some embodiments, the extended candidate display region 1128 further includes one or more affordances for sorting the candidate characters within a respective tabbed page according to a respective criterion (for example, by Hanyu pinyin, by stroke count, or by radical). The ability to sort the candidate characters of each category by a criterion other than recognition confidence score provides the user with an additional means of quickly locating the desired candidate character for text input.
In some embodiments, as shown in Figures 11H-11K, visually similar candidate characters are grouped, and only a representative character from each group of visually similar candidate characters is presented in the initial candidate display region 806. Because the multi-script handwriting recognition model described herein can produce many candidate characters that are almost equally good for a given handwriting input, the recognition model cannot always eliminate one candidate at the cost of another visually similar candidate. On a device with a limited display area, showing many visually similar candidates at once does not help the user select the correct character, because the subtle differences are not easy to discern, and even if the user can see the desired character, it may be difficult to select it from a very dense display using a finger or a stylus.
In some embodiments, to address the above problem, the user device identifies candidate characters that are highly similar to one another (for example, according to an index or dictionary of visually similar characters, or some image-based criterion), and groups them into corresponding groups. In some embodiments, one or more groups of visually similar characters may be identified from the set of candidate characters for a given handwriting input. In some embodiments, the user device identifies a representative candidate character from the multiple visually similar candidate characters in the same group, and displays only the representative candidate in the initial candidate display region 806. If a candidate character does not appear sufficiently similar to any other candidate character, it is shown on its own. In some embodiments, as shown in Figure 11H, each group's representative candidate character (for example, candidate characters 1118 and 1122, "a" and "T") is displayed in a manner (for example, in a bold box) different from candidate characters that do not belong to any group (for example, candidate characters 1120 and 1124, "是" and "J"). In some embodiments, the criterion for selecting the representative character of a group is based on the relative usage frequency of the candidate characters in the group. In other embodiments, other criteria may be used.
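The grouping of visually similar candidates and the promotion of each group's most frequently used member, as described above, might be sketched as follows. The similarity index and frequency table here are hypothetical stand-ins for the device's similar-character dictionary and usage-history statistics.

```python
# Sketch: collapse visually similar candidates to one representative per
# group, chosen by relative usage frequency. SIMILAR_GROUPS and
# USAGE_FREQ are assumed data, not the device's actual index.

SIMILAR_GROUPS = [{"a", "α", "巾"}, {"T", "J", "t"}]  # assumed index
USAGE_FREQ = {"a": 900, "α": 5, "巾": 40, "T": 300, "J": 120, "t": 700}

def collapse_candidates(candidates):
    """Return (display_list, groups): one representative per similar
    group, plus ungrouped candidates shown as themselves."""
    display, groups, seen = [], {}, set()
    for c in candidates:
        if c in seen:
            continue
        group = next((g for g in SIMILAR_GROUPS if c in g), None)
        if group:
            members = [m for m in candidates if m in group]
            rep = max(members, key=lambda m: USAGE_FREQ.get(m, 0))
            display.append(rep)
            groups[rep] = members  # revealed later by the expand gesture
            seen.update(members)
        else:
            display.append(c)      # shown on its own, like "是" above
            seen.add(c)
    return display, groups
```

For candidates ["a", "α", "巾", "是", "J", "T"], this sketch would display only "a", "是", and "T", keeping the similar members behind their representatives.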
In some embodiments, once the one or more representative characters have been shown to the user, the user can optionally expand the candidate display region 806 into an extended view to reveal the visually similar candidate characters. In some embodiments, selecting a particular representative character produces an extended view containing only the candidate characters in the same group as the selected representative character. Various designs for the extended view presenting visually similar candidates are possible. Figures 11H-11K show one embodiment in which the extended view for a representative candidate character is invoked by a predetermined gesture (for example, an expand gesture) detected over the representative candidate character (for example, representative character 1118). The predetermined gesture for invoking the extended view (for example, the expand gesture) is different from the predetermined gesture for selecting the representative character for text input (for example, a tap gesture).
As shown in Figures 11H-11I, when the user provides an expand gesture over the first representative character 1118 (for example, as shown by the two contacts 1138 and 1140 moving away from each other), the area showing the representative character 1118 is expanded, and the three visually similar candidate characters (for example, "a", "巾", and the like) are presented in magnified views (for example, in magnified boxes 1142, 1144, and 1146, respectively), in contrast to the other candidate characters not in the same expanded group (for example, "是").
As shown in Figure 11I, when the three visually similar candidate characters (for example, "a", "巾", and the like) are presented in the magnified views, the user can more easily see their fine differences. If one of the three candidate characters is the desired character input, the user can select that candidate character, for example, by touching the area in which the character is shown. As shown in Figures 11J-11K, the user has selected (with contact 1148) the second character shown in the extended view, in magnified box 1144. In response, the selected character is entered into the text input area 808 at the insertion point indicated by the cursor. As shown in Figure 11K, once a character has been selected, the handwriting input in the handwriting input region 804 and the candidate characters in the candidate display region 806 (or in the extended view of the candidate display region) are cleared to make way for subsequent handwriting input.
In some embodiments, if the user does not see the desired candidate character in the extended view 1142 of the first representative candidate character, the user can optionally use the same gesture to expand other representative characters shown in the candidate display region 806. In some embodiments, expanding another representative character in the candidate display region 806 automatically restores the currently presented extended view to the normal view. In some embodiments, the user can optionally restore the current extended view to the normal view using a pinch gesture. In some embodiments, the user can scroll the candidate display region 806 (for example, from left to right) to reveal other candidate characters not currently visible in the candidate display region 806.
Figures 12A-12B are a flow chart of an exemplary process 1200 in which a first subset of recognition results is presented in an initial candidate display region, and a second subset of the recognition results is presented in an extended candidate display region that is hidden from view until specifically invoked by the user. In exemplary process 1200, the device identifies, from multiple handwriting recognition results for a handwriting input, a subset of recognition results whose degree of visual similarity exceeds a predetermined threshold. The user device then selects a representative recognition result from the subset of recognition results, and displays the selected representative recognition result in the candidate display region of the display. Process 1200 is illustrated in Figures 11A-11K.
As shown in Figure 12A, in exemplary process 1200, the user device receives (1202) a handwriting input from a user. The handwriting input includes one or more handwritten strokes (for example, 1106, 1108, and 1110 in Figure 11C) provided in a handwriting input region (for example, 806 in Figure 11C) of a handwriting input interface (for example, 802 in Figure 11C). The user device identifies (1204), based on a handwriting recognition model, multiple output characters for the handwriting input (for example, the characters shown in tabbed page 1130, Figure 11C). The user device divides (1206) the multiple output characters into two or more categories based on a predetermined categorization criterion. In some embodiments, the predetermined categorization criterion determines (1208) whether a respective character is a commonly used character or a rarely used character.
In some embodiments, the user device displays (1210) the respective output characters of a first category of the two or more categories (for example, the commonly used characters) in an initial view of a candidate display region of the handwriting input interface (for example, 806 shown in Figure 11C), wherein the initial view of the candidate display region is provided together with an affordance (for example, 1104 in Figure 11C) for invoking an extended view of the candidate display region (for example, 1128 in Figure 11D).
In some embodiments, the user device receives (1212) a user input selecting the affordance for invoking the extended view, for example, as shown in Figure 11C. In response to the user input, the user device displays (1214), in the extended view of the candidate display region, the respective output characters of the first category and the respective output characters of at least a second category of the two or more categories that were not previously shown in the initial view of the candidate display region, for example, as shown in Figure 11D.
In some embodiments, the respective characters of the first category are characters found in a dictionary of commonly used characters, and the respective characters of the second category are characters found in a dictionary of rarely used characters. In some embodiments, the dictionary of commonly used characters and the dictionary of rarely used characters are dynamically adjusted or updated based on a usage history associated with the user device.
In some embodiments, the user device identifies (1216), from the multiple output characters, a group of characters that are visually similar to one another according to a predetermined similarity criterion (for example, based on a dictionary of similar characters, or based on certain spatially derived features). In some embodiments, the user device selects a representative character from the group of visually similar characters based on a predetermined selection criterion (for example, based on historical usage frequency). In some embodiments, the predetermined selection criterion is based on the relative usage frequency of the characters in the group. In some embodiments, the predetermined selection criterion is based on a preferred input language associated with the device. In some embodiments, the representative candidate is selected based on other factors indicating the likelihood that each candidate is the input intended by the user. For example, such factors include whether a candidate character belongs to the script of a soft keyboard currently installed on the user device, whether the candidate character is among a group of most commonly used characters in a particular language associated with the user or the user device, and so on.
In some embodiments, the user device displays (1220) the representative character (for example, "a") in the initial view of the candidate display region (for example, 806 in Figure 11H), in lieu of the other characters in the group of visually similar characters (for example, "巾"). In some embodiments, a visual indication (for example, selective visual highlighting, or a special background) is provided in the initial view of the candidate display region to indicate whether each candidate character is the representative character of a group or an ordinary candidate character not in any group. In some embodiments, the user device receives (1222) a predetermined expansion input (for example, an expand gesture) from the user, the predetermined expansion input being directed to the representative character shown in the initial view of the candidate display region, for example, as shown in Figure 11H. In some embodiments, in response to receiving the predetermined expansion input, the user device concurrently displays (1224) a magnified view of the representative character and respective magnified views of the one or more other characters in the group of visually similar characters, for example, as shown in Figure 11I.
In some embodiments, the predetermined expansion input is an expand gesture detected over the representative character shown in the candidate display region. In some embodiments, the predetermined expansion input is a contact detected over the representative character shown in the candidate display region that lasts longer than a predetermined threshold time. In some embodiments, the sustained contact for expanding the group has a longer threshold duration than the tap gesture for selecting the representative character for text input. In some embodiments, each representative character is displayed together with a respective affordance (for example, a respective expand button) for invoking the extended view of its group of visually similar candidate characters. In some embodiments, the predetermined expansion input is a selection of the respective affordance associated with the representative character.
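One way to realize the duration-based distinction between selecting a representative character and expanding its group is sketched below. The 0.5-second long-press threshold and the two-contact spread test are illustrative assumptions; the description does not specify actual values.

```python
# Sketch: classify a touch on a representative character as a selection
# tap or an expansion input. The hold threshold and the spread test are
# assumptions, not values from the description.

EXPAND_HOLD_SECONDS = 0.5  # assumed long-press threshold

def classify_touch(contacts):
    """contacts: one (duration_s, dx) pair per finger, ordered left to
    right, where dx is the contact's horizontal movement (negative =
    leftward). Returns "select", "expand", or "ignore"."""
    if len(contacts) == 2:
        # Two contacts moving away from each other: expand gesture.
        if contacts[0][1] < 0 < contacts[1][1]:
            return "expand"
        return "ignore"
    duration, _ = contacts[0]
    if duration >= EXPAND_HOLD_SECONDS:
        return "expand"   # sustained contact opens the group
    return "select"       # quick tap enters the character
```

The key design point mirrored here is that the expand path is reachable two ways (a spread gesture or a long press), while a short single-finger tap always selects.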
As described herein, in some embodiments, the repertoire of the multi-script handwriting recognition model includes emoji script. The handwriting input recognition module can identify emoji characters based on the user's handwriting input. In some embodiments, the handwriting recognition module presents the emoji character directly identified from the handwriting input, the characters or words of a natural human language expressing that emoji character, or both. In some embodiments, the handwriting input module identifies characters or words of a natural human language based on the user's handwriting input, and presents both the identified characters or words and the emoji characters corresponding to the identified characters or words. In other words, the handwriting input module provides a way to enter emoji characters without switching from the handwriting input interface to an emoji keyboard. In addition, the handwriting input module provides a way to enter regular natural-language characters and words by hand-drawing emoji characters. Figures 13A-13E provide exemplary user interfaces illustrating these different ways of entering emoji characters and regular natural-language characters.
Figure 13A shows an exemplary handwriting input interface 802 invoked within a chat application. The handwriting input interface 802 includes a handwriting input region 804, a candidate display region 806, and a text input area 808. In some embodiments, once the user is satisfied with the text composition in the text input area 808, the user can send the composed text to another participant of the current chat session. The conversation history of the chat session is shown in a dialogue panel 1302. In this example, the user has received a chat message 1304 (for example, "Happy Birthday") shown in the dialogue panel 1302.
As shown in Figure 13B, the user provides a handwriting input 1306 for the English word "Thanks" in the handwriting input region 804. In response to the handwriting input 1306, the user device identifies several candidate recognition results (for example, recognition results 1308, 1310, and 1312). The top-ranked recognition result 1303 is tentatively entered in box 1314 in the text input area 808.
As shown in Figure 13C, after the user has entered the handwritten word "Thanks" in the handwriting input region 806, the user then draws a stylized exclamation point with strokes 1316 in the handwriting input region 806 (for example, an elongated circle with a ring below it). The user device recognizes that the additional strokes 1316 form a recognition unit separate from the other recognition units previously identified from the accumulated handwritten strokes 1306 in the handwriting input region 806. Based on the newly entered recognition unit (that is, the recognition unit formed by strokes 1316), the user device identifies an emoji character (for example, a stylized "!") using the handwriting recognition model. Based on this identified emoji character, the user device presents a first recognition result 1318 (for example, "Thanks!" with the stylized "!") in the candidate display region 806. In addition, the user device also identifies the digit "8", which is visually similar to the newly entered recognition unit. Based on this identified digit, the user device presents a second recognition result 1322 (for example, "Thanks 8") in the candidate display region 806. Furthermore, based on the identified emoji character (for example, the stylized "!"), the user device also identifies the regular character corresponding to the emoji character (for example, the regular character "!"). Based on this identified regular character, the user device presents a third recognition result 1320 (for example, "Thanks!" with a regular "!") in the candidate display region 806. At this point, the user may select any of the candidate recognition results 1318, 1320, and 1322 and enter it into the text input area 808.
As shown in Figure 13D, the user continues to provide additional handwritten strokes 1324 in the handwriting input region 806. Specifically, the user draws a heart symbol after the stylized exclamation point. In response to the new handwritten strokes 1324, the user device recognizes that the newly provided handwritten strokes 1324 form another new recognition unit. Based on the new recognition unit, the user device identifies the heart emoji character and, alternatively, the digit "0" as candidate characters for the new recognition unit. Based on these newly identified candidate characters for the new recognition unit, the user device presents two updated candidate recognition results 1326 and 1330 (for example, "Thanks" followed by the stylized "!" and the heart emoji character, and "Thanks 80"). In some embodiments, the user device further identifies one or more regular characters or one or more words (for example, "Love") corresponding to the identified heart emoji character. Based on the one or more regular characters or one or more words identified for the identified emoji character, the user device presents a third recognition result 1328, in which the identified one or more emoji characters are replaced with the corresponding one or more regular characters or one or more words. As shown in Figure 13D, in recognition result 1328, the emoji exclamation point is replaced with a normal "!", and the heart emoji character is replaced with the regular word "Love".
As shown in Figure 13E, the user has selected one of the candidate recognition results (for example, candidate result 1326 showing the mixed-script text "Thanks" followed by the emoji characters), the text of the selected recognition result is entered into the text input area 808, and the text is then sent to the other participants of the chat session. A message bubble 1332 shows the message text in the dialogue panel 1302.
Figure 14 is a flow chart of an exemplary process 1400 in which a user enters emoji characters using handwriting input. Figures 13A-13E illustrate exemplary process 1400 in accordance with some embodiments.
In process 1400, the user device receives (1402) a handwriting input from a user. The handwriting input includes multiple handwritten strokes provided in a handwriting input region of a handwriting input interface. In some embodiments, the user device identifies (1404) multiple output characters from the handwriting input based on a handwriting recognition model. In some embodiments, the output characters include at least a first emoji character (for example, the stylized exclamation point or the heart emoji character in Figure 13D) and at least a first character from the script of a natural human language (for example, a character of the word "Thanks" in Figure 13D). In some embodiments, the user device displays (1406) a recognition result (for example, result 1326 in Figure 13D) that includes both the first emoji character (for example, the stylized exclamation point or the heart emoji character in Figure 13D) and the first character from the script of a natural human language (for example, a character of the word "Thanks" in Figure 13D), for example, as shown in Figure 13D.
In some embodiments, based on the handwriting recognition model, the user device optionally identifies (1408) at least a first semantic unit (for example, the word "Thanks") from the handwriting input, wherein the first semantic unit includes a respective character, word, or phrase capable of conveying a respective semantic meaning in a corresponding human language. In some embodiments, the user device identifies (1410) a second emoji character (for example, a "handshake" emoji character) associated with the first semantic unit (for example, the word "Thanks") identified from the handwriting input. In some embodiments, the user device displays (1412), in the candidate display region of the handwriting input interface, a second recognition result that includes at least the second emoji character identified from the first semantic unit (for example, a recognition result showing the "handshake" emoji character followed by the exclamation-point and heart emoji characters). In some embodiments, displaying the second recognition result further includes displaying the second recognition result concurrently with a third recognition result that includes at least the first semantic unit (for example, the recognition result showing "Thanks" followed by the emoji characters).
In some embodiments, the user device receives a user input selecting the first recognition result shown in the candidate display region. In some embodiments, in response to the user input, the user device enters the text of the selected first recognition result into the text input area of the handwriting input interface, wherein the text includes at least the first emoji character and the first character from the script of a natural human language. In other words, the user is able to enter mixed-script text with a single handwriting input in the handwriting input region (albeit a handwriting input that includes multiple strokes), without switching between a natural-language keyboard and an emoji character keyboard.
In some embodiments, the handwriting recognition model is trained on a multi-script training corpus that includes respective writing samples for the characters of at least three non-overlapping scripts, the three non-overlapping scripts including emoji characters, Chinese characters, and the set of Latin script.
In some embodiments, the user device identifies (1414) a second semantic unit (for example, the word "Love") corresponding to the first emoji character directly identified from the handwriting input (for example, the heart emoji character). In some embodiments, the user device displays (1416), in the candidate display region of the handwriting input interface, a fourth recognition result (for example, 1328 in Figure 13D) that includes at least the second semantic unit (for example, the word "Love") identified from the first emoji character (for example, the heart emoji character). In some embodiments, the user device displays the fourth recognition result (for example, result 1328, "Thanks!Love") in the candidate display region concurrently with the first recognition result (for example, result 1326), as shown in Figure 13D.
In some embodiments, the user device allows the user to enter regular text by drawing emoji characters. For example, if the user does not know how to spell the word "elephant", the user can optionally draw the stylized emoji character for "elephant" in the handwriting input region; and if the user device correctly recognizes the handwriting input as the emoji character for "elephant", the user device optionally also presents the word "elephant" in normal text, as one of the recognition results shown in the candidate display region. In another example, the user can draw a stylized cat in the handwriting input region instead of writing the Chinese character "猫" (meaning "cat"). If the user device identifies the emoji character for "cat" from the handwriting input provided by the user, the user device optionally also presents, among the candidate recognition results, the Chinese character "猫", which denotes "cat" in Chinese, together with the emoji character for "cat". By presenting normal text for an identified emoji character, the user device provides an alternative means of entering a complex character or word using a few stylized strokes commonly associated with a well-known emoji character. In some embodiments, the user device stores a dictionary linking emoji characters with their corresponding normal text (for example, characters, words, phrases, symbols, etc.) in one or more preferred scripts or languages (for example, English or Chinese).
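The dictionary linking emoji characters to normal text, used above to offer both forms as candidates, could be sketched as a simple per-language lookup. The entries below are illustrative examples drawn from the description, not an actual data set.

```python
# Sketch: dictionary linking recognized emoji characters to normal-text
# equivalents in preferred languages, used to add normal-text candidates
# alongside the emoji candidate (cf. results 1320 and 1328).

EMOJI_TO_TEXT = {
    "❤": {"en": "Love"},
    "🐱": {"en": "cat", "zh": "猫"},
    "🐘": {"en": "elephant"},
}

def expand_candidates(recognized, langs=("en", "zh")):
    """For each recognized character, also offer any linked normal-text
    forms as additional candidate results."""
    out = []
    for ch in recognized:
        out.append(ch)
        for lang in langs:
            text = EMOJI_TO_TEXT.get(ch, {}).get(lang)
            if text:
                out.append(text)
    return out
```

With this sketch, a recognized cat emoji would yield the emoji itself plus "cat" and "猫" as selectable candidates, matching the behavior described for the "cat" example.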
In some embodiments, the user device identifies an emoji character based on the visual similarity between the emoji character and an image generated from the handwriting input. In some embodiments, to recognize emoji characters from handwriting input, the handwriting recognition model used on the user device is trained with a training corpus that includes both handwriting samples corresponding to the characters of natural human-language scripts and handwriting samples corresponding to a set of artificially designed emoji characters. In some embodiments, emoji characters related to the same semantic concept may have different appearances when mixed with text of different natural languages. For example, the emoji character for the semantic concept "Love" may be a "heart" emoji character when presented with normal text of one natural language (for example, Japanese), and may be a "kiss" emoji character when presented with normal text of another natural language (for example, English or French).
As described herein, when performing recognition on multi-character handwriting input, the handwriting input module performs segmentation on the handwriting input currently accumulated in the handwriting input region, dividing the accumulated strokes into one or more recognition units. One of the parameters for determining how to segment the handwriting input may be how the strokes are clustered in the handwriting input region and the distances between different clusters of strokes. People have different writing styles, however. Some tend to write very sparsely, with large distances between strokes or between different parts of the same character, while others tend to write densely, with very small distances between strokes or between different characters. Even for the same user, a handwritten character may deviate from a balanced appearance due to imperfect planning, and may be tilted, stretched, or squeezed in various ways. As described herein, the multi-script handwriting recognition model provides stroke-order-independent recognition; therefore, the user may write characters or portions of characters out of order. As a result, it is difficult to ensure that the handwriting input is spatially uniform and balanced between characters.
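A minimal sketch of the distance-based segmentation described above: cluster strokes by the horizontal gap between their bounding boxes. The fixed gap threshold is an assumption for illustration; the actual segmentation would use richer features such as stroke size and spatial distribution, which is precisely why it can misjudge sparse or dense writers.

```python
# Sketch: segment accumulated strokes into recognition units by the
# horizontal gap between stroke bounding boxes. Each stroke is a list
# of (x, y) points; GAP is an assumed threshold, not the device's
# actual (adaptive) criterion.

GAP = 30  # pixels; assumed inter-character gap threshold

def x_span(stroke):
    xs = [x for x, _ in stroke]
    return min(xs), max(xs)

def segment(strokes):
    """Group strokes, sorted left to right, into recognition units."""
    units = []
    for stroke in sorted(strokes, key=lambda s: x_span(s)[0]):
        lo, hi = x_span(stroke)
        if units and lo - units[-1]["hi"] <= GAP:
            units[-1]["strokes"].append(stroke)            # same unit
            units[-1]["hi"] = max(units[-1]["hi"], hi)
        else:
            units.append({"strokes": [stroke], "hi": hi})  # new unit
    return [u["strokes"] for u in units]
```

Note how a writer who leaves more than GAP pixels between the two halves of one character would get two units here, which is the failure mode the pinch gesture below is meant to correct.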
In some embodiments, the handwriting input module described herein provides a way for the user to inform the handwriting input module whether to merge two adjacent recognition units into a single recognition unit, or to divide a single recognition unit into two separate recognition units. With the user's help, the handwriting input module can correct the initial segmentation and produce the result the user intends.
Figures 15A-15J illustrate some exemplary user interfaces and processes in which the user provides predetermined pinch and expand gestures to modify the recognition units identified by the user device.
As shown in Figures 15A-15B, the user has entered multiple handwritten strokes 1502 (for example, three strokes) in the handwriting input region 806 of the handwriting input interface 802. The user device identifies a single recognition unit based on the currently accumulated handwritten strokes 1502, and presents three candidate characters 1504, 1506, and 1508 (for example, "巾", "中", and "币", respectively) in the candidate display region 806.
Figure 15C shows that the user has entered several additional strokes 1510 to the right of the initial handwritten strokes 1502 in the handwriting input region 804. The user device determines (e.g., based on the sizes and spatial distribution of the strokes 1502 and 1510) that the strokes 1502 and the strokes 1510 should be treated as two independent recognition units. Based on this division into recognition units, the user device provides input images of the first and second recognition units to the handwriting recognition model and obtains two sets of candidate characters. The user device then generates multiple recognition results (e.g., 1512, 1514, 1516, and 1518) based on different combinations of the identified characters. Each recognition result includes a character identified for the first recognition unit and a character identified for the second recognition unit. As shown in Figure 15C, each of the recognition results 1512, 1514, 1516, and 1518 includes two identified characters.
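The size-and-spatial-distribution criterion described above can be sketched as a simple gap heuristic along the writing direction. This is an illustrative sketch only, not the patent's actual segmentation criteria; the `Stroke` type and the `gap_threshold` parameter are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Stroke:
    x_min: float
    x_max: float

def segment_into_units(strokes, gap_threshold=0.5):
    """Group strokes into recognition units along the writing direction.

    A new unit is started when the horizontal gap between a stroke and
    the current unit exceeds gap_threshold times the current unit's
    width. (Illustrative heuristic; the real criteria are richer.)
    """
    units = []
    for stroke in sorted(strokes, key=lambda s: s.x_min):
        if units:
            unit = units[-1]
            unit_width = max(s.x_max for s in unit) - min(s.x_min for s in unit)
            gap = stroke.x_min - max(s.x_max for s in unit)
            if gap <= gap_threshold * max(unit_width, 1e-6):
                unit.append(stroke)
                continue
        units.append([stroke])
    return units
```

With this heuristic, two clusters of strokes separated by a gap larger than half a unit width (as in Figure 15C) fall into two recognition units.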
In this example, assume that the user intended the handwriting input to be recognized as a single character, but carelessly left too much space between the left portion (e.g., the left radical "巾") and the right portion (e.g., the right component "冒") of the handwritten character (e.g., "帽"). After seeing the results presented in the candidate display region 806 (e.g., 1512, 1514, 1516, and 1518), the user realizes that the user device has incorrectly divided the current handwriting input into two recognition units. Although the segmentation may be based on objective criteria, the user would rather not delete the current handwriting input and rewrite the entire character with a smaller distance between the left and right portions.
Instead, as shown in Figure 15D, the user uses a pinch gesture over the two clusters of handwritten strokes 1502 and 1510 to indicate to the handwriting input module that the two identified recognition units should be merged into a single recognition unit. The pinch gesture is indicated by two contacts 1520 and 1522 on the touch-sensitive surface moving toward each other.
Figure 15E shows that, in response to the user's pinch gesture, the user device has modified the segmentation of the currently accumulated handwriting input (e.g., the strokes 1502 and 1510) and merged the handwritten strokes into a single recognition unit. As shown in Figure 15E, the user device provides an input image to the handwriting recognition model based on the modified recognition unit and obtains three new candidate characters 1524, 1526, and 1528 (e.g., "帽", "帼", and so on) for the modified recognition unit. In some embodiments, as shown in Figure 15E, the user device optionally adjusts the rendering of the handwriting input in the handwriting input region 804 to reduce the distance between the left and right clusters of handwritten strokes. In some embodiments, the user device does not change the rendering of the handwriting input shown in the handwriting input region 804 in response to the pinch gesture. In some embodiments, the user device distinguishes the pinch gesture from an input stroke based on detecting two simultaneous contacts (as opposed to a single contact) in the handwriting input region 804.
As shown in Figure 15F, the user enters two more strokes 1530 (i.e., the strokes of the character "子") to the right of the previously entered handwriting input. The user device determines that the newly entered strokes 1530 form a new recognition unit, and identifies a candidate character (e.g., "子") for the newly identified recognition unit. The user device then combines the newly identified character (e.g., "子") with the candidate characters of the earlier recognition unit, and presents several different recognition results (e.g., the results 1532 and 1534) in the candidate display region 806.
After the handwritten strokes 1530, the user continues to write more strokes 1536 (e.g., three additional strokes) to the right of the strokes 1530, as shown in Figure 15G. Because the horizontal distance between the strokes 1530 and the strokes 1536 is very small, the user device determines that the strokes 1530 and 1536 belong to the same recognition unit, and provides the handwriting recognition model with an input image formed from the strokes 1530 and 1536. The handwriting recognition model identifies three different candidate characters for the modified recognition unit, and two revised recognition results 1538 and 1540 are generated for the currently accumulated handwriting input.
In this example, assume that the last two groups of strokes 1530 and 1536 were actually intended as two independent characters (e.g., "子" and "士"). After the user sees that the two groups of strokes 1530 and 1536 have been incorrectly combined into a single recognition unit by the user device, the user provides an expand gesture to inform the user device that the two groups of strokes 1530 and 1536 should be divided into two independent recognition units. As shown in Figure 15H, the user makes two contacts 1542 and 1544 near the strokes 1530 and 1536, and then moves the two contacts away from each other in a substantially horizontal direction (i.e., along the default writing direction).
Figure 15I shows that, in response to the user's expand gesture, the user device revises the previous segmentation of the currently accumulated handwriting input and assigns the strokes 1530 and the strokes 1536 to two consecutive recognition units. Based on the input images generated for the two independent recognition units, the user device identifies one or more candidate characters for the first recognition unit based on the strokes 1530, and one or more candidate characters for the second recognition unit based on the strokes 1536. The user device then generates two new recognition results 1546 and 1548. In some embodiments, the user device optionally modifies the rendering of the strokes 1530 and 1536 to reflect the division of the previously identified recognition unit.
As shown in Figures 15J-15K, the user selects (e.g., as indicated by the contact 1550) one of the candidate recognition results shown in the candidate display region 806, and the selected recognition result (e.g., the result 1548) is entered into the text input area 808 of the user interface. After the selected recognition result is entered into the text input area 808, the candidate display region 806 and the handwriting input region 804 are cleared and ready for subsequent user input.
Figures 16A-16B are flow charts of an example process 1600 in which the user uses a predetermined gesture (e.g., a pinch gesture and/or an expand gesture) to inform the handwriting input module how to divide the current handwriting input or to revise its existing segmentation. Figures 15A-15K provide examples of the example process 1600 in accordance with some embodiments.
In some embodiments, the user device receives (1602) a handwriting input from the user. The handwriting input includes multiple handwritten strokes provided on a touch-sensitive surface coupled to the device. In some embodiments, the user device renders (1604) the multiple handwritten strokes in real time in the handwriting input region of the handwriting input interface (e.g., the handwriting input region 804 of Figures 15A-15K). The user device receives one of a pinch gesture input and an expand gesture input over the multiple handwritten strokes, for example, as shown in Figures 15D and 15H.
In some embodiments, upon receiving a pinch gesture input, the user device generates (1606) a first recognition result based on the multiple handwritten strokes by treating them as a single recognition unit (e.g., as shown in Figures 15C-15E).

In some embodiments, upon receiving an expand gesture input, the user device generates (1608) a second recognition result based on the multiple handwritten strokes by treating them as two independent recognition units pulled apart by the expand gesture input (e.g., as shown in Figures 15G-15I).
In some embodiments, upon generating the respective one of the first recognition result and the second recognition result, the user device displays the generated recognition result in the candidate display region of the handwriting input interface, for example, as shown in Figures 15E and 15I.
In some embodiments, the pinch gesture input includes two simultaneous contacts on the touch-sensitive surface that converge toward each other in the region occupied by the multiple handwritten strokes. In some embodiments, the expand gesture input includes two simultaneous contacts on the touch-sensitive surface that move apart from each other in the region occupied by the multiple handwritten strokes.
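One way to tell the two gestures apart, consistent with the two-simultaneous-contact criterion above, is to compare the distance between the contacts at the start and at the end of the gesture. A minimal sketch; the function name and the motion threshold are hypothetical, and a single contact would be routed to ordinary stroke input instead.

```python
def classify_two_contact_gesture(track_a, track_b, min_change=20.0):
    """Classify two simultaneous contact tracks as 'pinch' or 'expand'.

    track_a / track_b: lists of (x, y) positions over time for the two
    contacts. min_change is a hypothetical motion threshold in points.
    """
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    start = dist(track_a[0], track_b[0])
    end = dist(track_a[-1], track_b[-1])
    if end < start - min_change:
        return "pinch"    # contacts converge: merge adjacent units
    if end > start + min_change:
        return "expand"   # contacts diverge: split a unit
    return "ambiguous"
```

The dominant axis of motion (horizontal vs. vertical) could further disambiguate whether the merge or split applies along the row or across adjacent rows, as discussed later.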
In some embodiments, the user device identifies (1614) two adjacent recognition units from the multiple handwritten strokes. The user device displays (1616) in the candidate display region an initial recognition result that includes the respective characters identified from the two adjacent recognition units (e.g., the results 1512, 1514, 1516, and 1518 in Figure 15C), for example, as shown in Figure 15C. In some embodiments, when displaying the first recognition result (e.g., the result 1524, 1526, or 1528 in Figure 15E) in response to the pinch gesture, the user device replaces (1618) the initial recognition result with the first recognition result in the candidate display region. In some embodiments, the user device receives (1620) the pinch gesture input while displaying the initial recognition result in the candidate display region, as shown in Figure 15D. In some embodiments, in response to the pinch gesture input, the user device re-renders (1622) the multiple handwritten strokes to reduce the distance between the two adjacent recognition units in the handwriting input region, for example, as shown in Figure 15E.
In some embodiments, the user device identifies (1624) a single recognition unit from the multiple handwritten strokes. The user device displays (1626) in the candidate display region an initial recognition result that includes the character identified from the single recognition unit (e.g., the result 1538 or 1540 of Figure 15G). In some embodiments, when displaying the second recognition result (e.g., the result 1546 or 1548 in Figure 15I) in response to the expand gesture, the user device replaces (1628) the initial recognition result (e.g., the result 1538 or 1540) with the second recognition result (e.g., the result 1546 or 1548) in the candidate display region, for example, as shown in Figures 15H-15I. In some embodiments, the user device receives (1630) the expand gesture input while displaying the initial recognition result in the candidate display region, as shown in Figure 15H. In some embodiments, in response to the expand gesture input, the user device re-renders (1632) the multiple handwritten strokes to increase the distance between a first subset of strokes assigned to a first recognition unit and a second subset of handwritten strokes assigned to a second recognition unit in the handwriting input region, for example, as shown in Figures 15H and 15I.
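The merge and split operations behind steps 1618 and 1628 can be sketched over a list of recognition units, with each unit a list of strokes represented here as (x_min, x_max) extents. This is a hypothetical sketch; in practice the module would re-run recognition on the modified units and regenerate the candidate results afterwards.

```python
def apply_gesture_to_segmentation(units, gesture, index):
    """Return a new list of recognition units after a pinch or expand
    gesture at position `index` in the unit list.

    units: list of recognition units; each unit is a list of strokes,
    each stroke an (x_min, x_max) tuple. Names are illustrative.
    """
    units = [list(u) for u in units]
    if gesture == "pinch" and index + 1 < len(units):
        # merge the two adjacent units under the gesture
        merged = units[index] + units[index + 1]
        return units[:index] + [merged] + units[index + 2:]
    if gesture == "expand" and len(units[index]) > 1:
        # split the unit under the gesture at its largest internal gap
        u = sorted(units[index], key=lambda s: s[0])
        gaps = [u[i + 1][0] - u[i][1] for i in range(len(u) - 1)]
        cut = gaps.index(max(gaps)) + 1
        return units[:index] + [u[:cut], u[cut:]] + units[index + 1:]
    return units
```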
In some embodiments, after providing strokes and realizing that the strokes may be too dispersed to be segmented correctly by the standard segmentation process, the user optionally provides a pinch gesture right away to inform the user device that the multiple strokes should be treated as a single recognition unit. The user device can distinguish the pinch gesture from a normal stroke based on the two simultaneous contacts present in the pinch gesture. Similarly, in some embodiments, after providing strokes and realizing that the strokes may be too crowded to be segmented correctly by the standard segmentation process, the user optionally provides an expand gesture right away to inform the user device that the multiple strokes should be treated as two independent recognition units. The user device can distinguish the expand gesture from a normal stroke based on the two simultaneous contacts present in the expand gesture.
In some embodiments, the direction of motion of the pinch gesture or the expand gesture optionally provides additional guidance about how to divide the strokes under the gesture. For example, if multi-row handwriting input is enabled for the handwriting input region, a pinch gesture in which the two contacts move in the vertical direction can inform the handwriting input module to merge two recognition units identified in two adjacent rows into a single recognition unit (e.g., as an upper radical and a lower radical). Similarly, an expand gesture in which the two contacts move in the vertical direction can inform the handwriting input module to divide a single recognition unit into two recognition units in two adjacent rows. In some embodiments, the pinch and expand gestures can also provide segmentation guidance within sub-portions of a character input, such as merging two sub-components in different parts (e.g., the upper, lower, left, or right portion) of a composed character, or splitting out a single component of a composed character. This is particularly useful for recognizing complex composed Chinese characters, because users often lose the correct proportion and balance when writing complex composed characters by hand. Being able to adjust the proportion and balance of the handwriting input with pinch and expand gestures after the handwriting input is completed helps the user enter the correct character without making several attempts to achieve the correct proportion and balance.
As described herein, the handwriting input module allows the user to enter multi-character handwriting input, and tolerates out-of-order strokes within a character, across multiple characters, and even across phrases, sentences, and/or rows of the multi-character handwriting input in the handwriting input region. In some embodiments, the handwriting input module also provides character-by-character deletion in the handwriting input region, where the order of character deletion is opposite to the writing direction and independent of when the strokes of each character were provided in the handwriting input region. In some embodiments, stroke-by-stroke deletion is optionally performed on each recognition unit (e.g., a character or radical) in the handwriting input region, where strokes are deleted in the reverse of the temporal order in which the strokes of the recognition unit were provided. Figures 17A-17H show exemplary user interfaces for responding to a deletion input from the user and providing character-by-character deletion in a multi-character handwriting input.
As shown in Figure 17A, the user provides multiple handwritten strokes 1702 in the handwriting input region 804 of the handwriting input interface 802. Based on the currently accumulated strokes 1702, the user device presents three recognition results (e.g., the results 1704, 1706, and 1708) in the candidate display region 806. As shown in Figure 17B, the user provides multiple additional strokes 1710 in the handwriting input region 804. The user device identifies three new output characters and replaces the three previous recognition results 1704, 1706, and 1708 with three new recognition results 1712, 1714, and 1716. In some embodiments, as shown in Figure 17B, even though the user device identifies two independent recognition units (e.g., the strokes 1702 and the strokes 1710) from the current handwriting input, the cluster of strokes 1710 does not correspond well to any character in the vocabulary of the handwriting recognition module. Thus, the candidate characters identified for the recognition unit comprising the strokes 1710 (e.g., "木", "暴") all have recognition confidence below a predetermined threshold. In some embodiments, the user device presents a partial recognition result (e.g., the result 1712) in the candidate display region 806 that includes only the candidate character for the first recognition unit (e.g., "日"), without any candidate character for the second recognition unit. In some embodiments, the user device also displays complete recognition results that include candidate characters for both recognition units (e.g., the result 1714 or 1716), regardless of whether the recognition confidence exceeds the predetermined threshold. Presenting a partial recognition result informs the user which portion of the handwriting input needs to be revised. In addition, the user may also choose to first enter the correctly recognized portion of the handwriting input, and then rewrite the portion that was not recognized correctly.
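The partial-result behavior can be sketched as filtering per-unit candidates against a confidence threshold. This is a sketch under assumed data shapes; the threshold value and the function name are illustrative, not from the source.

```python
def build_results(unit_candidates, confidence_threshold=0.5):
    """Build a full and a partial recognition result.

    unit_candidates: one entry per recognition unit, in spatial order;
    each entry is a list of (character, confidence) pairs. The full
    result keeps the best candidate for every unit; the partial result
    keeps only the units whose best candidate meets the threshold.
    """
    best = [max(cands, key=lambda c: c[1]) for cands in unit_candidates]
    full = "".join(ch for ch, _ in best)
    partial = "".join(ch for ch, conf in best if conf >= confidence_threshold)
    return full, partial
```

With a confidently recognized first unit and a low-confidence second unit, the partial result contains only the first unit's character, mirroring result 1712 in Figure 17B.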
Figure 17C shows the user continuing to provide additional handwritten strokes 1718 to the left of the strokes 1710. Based on the relative position and distance of the strokes 1718, the user device determines that the newly added strokes belong to the same recognition unit as the cluster of handwritten strokes 1702. Based on the modified recognition unit, a new character (e.g., "电") is identified for the first recognition unit, and a new set of recognition results 1720, 1722, and 1724 is generated. Again, the first recognition result 1720 is a partial recognition result, because none of the candidate characters identified for the strokes 1710 meets the predetermined confidence threshold.
Figure 17D shows the user now entering multiple new strokes 1726 between the strokes 1702 and the strokes 1710. The user device assigns the newly entered strokes 1726 to the same recognition unit as the strokes 1710. The user has now finished entering all of the handwritten strokes for two Chinese characters (e.g., "电脑"), and the correct recognition result 1728 is shown in the candidate display region 806.
Figure 17E shows the user entering the initial portion of a deletion input, for example, by making a light contact 1730 on the delete button 1732. If the user maintains the contact with the delete button 1732, the user can delete the current handwriting input character by character (or recognition unit by recognition unit), rather than having the deletion performed on all of the handwriting input at once.
In some embodiments, when the user's finger first touches the delete button 1732 on the touch-sensitive screen, the last recognition unit along the default writing direction (e.g., from left to right), here the recognition unit for the character "脑", is visually highlighted relative to the one or more other recognition units displayed at the same time in the handwriting input region 804 (e.g., with a highlighted border 1734, a highlighted background, etc.), as shown in Figure 17E.
In some embodiments, when the user device detects that the user has maintained the contact 1730 on the delete button 1732 beyond a threshold duration, the user device removes the highlighted recognition unit (e.g., the one in the box 1734) from the handwriting input region 804, as shown in Figure 17F. In addition, the user device also revises the recognition results shown in the candidate display region 806 to delete any output character generated from the deleted recognition unit, as shown in Figure 17F.
Figure 17F also shows that, if the user keeps the contact 1730 on the delete button 1732 after the last recognition unit in the handwriting input region 804 (e.g., the recognition unit for the character "脑") has been deleted, the recognition unit adjacent to the deleted recognition unit (e.g., the recognition unit for the character "电") becomes the next recognition unit to be deleted. As shown in Figure 17F, the remaining recognition unit becomes the visually highlighted recognition unit (e.g., in the box 1736) and is ready to be deleted. In some embodiments, visually highlighting a recognition unit provides a preview of the recognition unit that will be deleted if the user continues to maintain contact with the delete button. If the user breaks the contact with the delete button before the threshold duration is reached, the visual highlight is removed from the last recognition unit and the recognition unit is not deleted. Those skilled in the art will recognize that the contact duration is reset after each recognition unit is deleted. In addition, in some embodiments, the contact intensity (e.g., the pressure the user applies in the contact 1730 with the touch-sensitive screen) is optionally used to adjust the threshold duration, to confirm the user's intent to delete the currently highlighted recognition unit. Figures 17F and 17G show that the user has broken the contact 1730 on the delete button 1732 before the threshold duration was reached, and the recognition unit for the character "电" remains in the handwriting input region 804. When the user selects (e.g., as indicated by the contact 1740) the first recognition result (e.g., the result 1738) for the recognition unit, the text of the first recognition result 1738 is entered into the text input area 808, as shown in Figures 17G-17H.
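The hold-to-delete behavior described above (highlight preview on touch-down, deletion past a threshold duration, timer reset after each deletion, cancellation on lift, optional pressure adjustment) can be sketched as a small controller. All names are hypothetical, the pressure-to-threshold mapping is a made-up placeholder, and a clock function is injected so the sketch stays testable.

```python
import time

class DeleteButtonController:
    """Sketch of hold-to-delete over recognition units in spatial order."""

    def __init__(self, units, hold_threshold=0.5, clock=None):
        self.units = units                  # recognition units, spatial order
        self.hold_threshold = hold_threshold
        self.clock = clock or time.monotonic
        self.highlighted = None
        self._t0 = None
        self._threshold = hold_threshold

    def touch_down(self, pressure=1.0):
        if self.units:
            self.highlighted = self.units[-1]   # preview of what would be deleted
        self._t0 = self.clock()
        # placeholder: firmer presses shorten the confirmation delay
        self._threshold = self.hold_threshold / max(pressure, 1e-6)

    def tick(self):
        """Call periodically while the contact is held."""
        if self._t0 is None or not self.units:
            return
        if self.clock() - self._t0 >= self._threshold:
            self.units.pop()                    # delete the highlighted unit
            self._t0 = self.clock()             # duration resets per deletion
            self.highlighted = self.units[-1] if self.units else None

    def touch_up(self):
        # lifting before the threshold cancels the pending deletion
        self.highlighted = None
        self._t0 = None
```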
Figures 18A-18B are flow charts of an example process 1800 in which the user device provides character-by-character deletion in a multi-character handwriting input. In some embodiments, the deletion of the handwriting input is performed before the characters recognized from the handwriting input have been confirmed and entered into the text input area of the user interface. In some embodiments, characters in the handwriting input are deleted according to the reverse spatial order of the recognition units identified from the handwriting input, independent of the temporal order in which the recognition units were formed. Figures 17A-17H illustrate the example process 1800 in accordance with some embodiments.
As shown in Figure 18A, in the example process 1800, the user device receives (1802) a handwriting input from the user, the handwriting input including multiple handwritten strokes provided in the handwriting input region (e.g., the region 804 of Figure 17D) of the handwriting input interface. The user device identifies (1804) multiple recognition units from the multiple handwritten strokes, each recognition unit including a respective subset of the multiple handwritten strokes. For example, as shown in Figure 17D, a first recognition unit includes the strokes 1702 and 1718, and a second recognition unit includes the strokes 1710 and 1726. The user device generates (1806) a multi-character recognition result that includes the respective characters identified from the multiple recognition units (e.g., the result 1728 in Figure 17D). In some embodiments, the user device displays the multi-character recognition result (e.g., the result 1728 of Figure 17D) in the candidate display region of the handwriting input interface. In some embodiments, while displaying the multi-character recognition result in the candidate display region, the user device receives (1810) a deletion input from the user (e.g., the contact 1730 on the delete button 1732), as shown in Figure 17E. In some embodiments, in response to receiving the deletion input, the user device removes (1812) the end character (e.g., the character "脑" at the end of the spatial sequence "电脑") from the multi-character recognition result (e.g., the result 1728) shown in the candidate display region (e.g., the candidate display region 806), for example, as shown in Figures 17E-17F.
In some embodiments, as the multiple handwritten strokes are provided by the user, the user device renders (1814) the multiple handwritten strokes in real time in the handwriting input region of the handwriting input interface, for example, as shown in Figures 17A-17D. In some embodiments, in response to receiving the deletion input, the user device removes (1816) from the handwriting input region (e.g., the handwriting input region 804 in Figure 17E) the respective subset of the multiple handwritten strokes that corresponds to the end recognition unit (e.g., the recognition unit comprising the strokes 1726 and 1710) in the spatial sequence formed by the multiple recognition units in the handwriting input region. The end recognition unit corresponds to the end character (e.g., the character "脑") in the multi-character recognition result (e.g., the result 1728 in Figure 17E).
In some embodiments, the end recognition unit does not include (1818) the temporally last handwritten stroke among the multiple handwritten strokes provided by the user. For example, if the user provided the stroke 1718 after providing the strokes 1726 and 1710, the end recognition unit including the strokes 1726 and 1710 is still deleted first.
In some embodiments, in response to receiving the initial portion of the deletion input, the user device visually distinguishes (1820) the end recognition unit from the other recognition units identified in the handwriting input region, for example, as shown in Figure 17E. In some embodiments, the initial portion of the deletion input is an initial contact detected (1822) on a delete button in the handwriting input interface, and the deletion input is detected when the initial contact is sustained for more than a predetermined threshold amount of time.
In some embodiments, the end recognition unit corresponds to a handwritten Chinese character. In some embodiments, the handwriting input is written in a cursive writing style. In some embodiments, the handwriting input corresponds to multiple Chinese characters written in a cursive writing style. In some embodiments, at least one handwritten stroke among the handwritten strokes is divided between two adjacent recognition units of the multiple recognition units. For example, the user sometimes uses a long stroke that extends across multiple characters, and in such cases the segmentation module of the handwriting input module optionally divides the long stroke into several recognition units. When handwriting deletion is performed character by character (or recognition unit by recognition unit), only one segment of the long stroke is deleted at a time (e.g., the segment within the corresponding recognition unit).
In some embodiments, the deletion input is a sustained contact (1824) on a delete button provided in the handwriting input interface, and removing the respective subset of the multiple handwritten strokes further includes removing the handwritten strokes in the end recognition unit from the handwriting input region stroke by stroke, in the reverse of the temporal order in which the strokes of the subset were provided by the user.
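Combining the spatial and temporal rules above (spatially last unit first; within a unit, reverse temporal stroke order), the deletion order can be sketched as a generator. The stroke representation is an assumption.

```python
def stroke_deletion_order(units):
    """Yield stroke ids in the order they would be deleted.

    units: recognition units in spatial (writing-direction) order; each
    stroke is a (stroke_id, t) pair, where t is the time it was drawn.
    Units are consumed from the spatial end; within a unit, strokes go
    in reverse temporal order.
    """
    for unit in reversed(units):
        for stroke_id, _t in sorted(unit, key=lambda s: s[1], reverse=True):
            yield stroke_id
```

Note that a stroke drawn last overall (e.g., "b" at t=3 below) is still deleted only after the end unit is fully removed, matching step 1818.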
In some embodiments, the user device generates (1826) a partial recognition result that includes a subset of the respective characters identified from the multiple recognition units, where each character in the subset meets a predetermined confidence threshold, for example, as shown in Figures 17B and 17C. In some embodiments, the user device displays (1828) the partial recognition result (e.g., the result 1712 in Figure 17B and the result 1720 in Figure 17C) concurrently with the multi-character recognition result (e.g., the results 1714 and 1722) in the candidate display region of the handwriting input interface. In some embodiments, the partial recognition result omits at least the end character of the multi-character recognition result. In some embodiments, the partial recognition result omits at least the initial character of the multi-character recognition result. In some embodiments, the partial recognition result omits at least a middle character of the multi-character recognition result.
In some embodiments, the minimum unit of deletion is a radical, and whenever a radical happens to remain as the last recognition unit of the handwriting input in the handwriting input region, one radical of the handwriting input is deleted at a time.
As described herein, in some embodiments, the user device provides a horizontal writing mode and a vertical writing mode. In some embodiments, in the horizontal writing mode, the user device allows the user to enter text in one or both of a left-to-right writing direction and a right-to-left writing direction. In some embodiments, in the vertical writing mode, the user device allows the user to enter text in one or both of a top-to-bottom writing direction and a bottom-to-top writing direction. In some embodiments, the user device provides various affordances in the user interface (e.g., writing mode or writing direction buttons) to invoke the corresponding writing mode and/or writing direction for the current handwriting input. In some embodiments, the text input direction in the text input area defaults to the same direction as the handwriting input direction in the handwriting input region. In some embodiments, the user device allows the user to manually set the input direction in the text input area and the writing direction in the handwriting input region. In some embodiments, the text display direction in the candidate display region defaults to the same direction as the handwriting input direction in the handwriting input region. In some embodiments, the user device allows the user to manually set the text display direction in the text input area independently of the handwriting input direction in the handwriting input region. In some embodiments, the user device associates the writing mode and/or writing direction of the handwriting input interface with a corresponding device orientation, and a change in device orientation automatically triggers a change in writing mode and/or writing direction. In some embodiments, a change in writing direction automatically causes the top-ranked recognition result to be entered into the text input area.
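The orientation association and the auto-commit on rotation might be sketched as follows. The specific orientation-to-mode mapping is an assumption (the text notes that the association can vary per application), as are all names.

```python
def writing_mode_for_orientation(orientation, mapping=None):
    """Map a device orientation to (writing mode, writing direction)."""
    default = {
        "landscape": ("horizontal", "left-to-right"),
        "portrait": ("vertical", "top-to-bottom"),
    }
    return (mapping or default).get(orientation, ("horizontal", "left-to-right"))

def on_orientation_change(text_area, top_result, orientation):
    """On rotation, auto-commit the top-ranked result to the text input
    area before switching the writing mode (sketch)."""
    if top_result is not None:
        text_area.append(top_result)
    return writing_mode_for_orientation(orientation)
```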
Figures 19A-19F show exemplary user interfaces of a user device that provides both a horizontal input mode and a vertical input mode.
Figure 19A shows the user device in the horizontal input mode. In some embodiments, the horizontal input mode is provided when the user device is in a landscape orientation, as shown in Figure 19A. In some embodiments, the horizontal input mode is optionally associated with, and provided in, a portrait orientation of the device. The association between device orientation and writing mode can differ in different applications.
In the horizontal input mode, the user can provide handwritten characters along a horizontal writing direction (e.g., a default left-to-right writing direction, or a default right-to-left writing direction). In the horizontal input mode, the user device divides the handwriting input into one or more recognition units along the horizontal writing direction.
In some embodiments, the user device only allows single-row input in the handwriting input region. In some embodiments, as shown in Figure 19A, the user device allows multi-row input (e.g., two rows of input) in the handwriting input region. In Figure 19A, the user provides multiple strokes in several rows in the handwriting input region 804. Based on the sequence in which the user provided the multiple handwritten strokes, and the relative positions and distances among the multiple handwritten strokes, the user device determines that the user has entered two rows of characters. After dividing the handwriting input into two independent rows, the device determines the one or more recognition units in each row.
As shown in Figure 19A, the user device identifies a respective character for each recognition unit identified in the current handwriting input 1902, and generates several recognition results 1904 and 1906. As further shown in Figure 19A, in some embodiments, if the output character for a particular group of strokes (e.g., the recognition unit formed by the initial strokes, with output character such as the letter "I") has low priority, the user device optionally generates a partial recognition result (e.g., the result 1906) that shows only the output characters with sufficient recognition confidence. In some embodiments, the user may, informed by the partial recognition result 1906, individually delete or rewrite the initial strokes so that the recognition model can generate the correct recognition result. In this particular example, there is no need to edit the first recognition unit, because the first recognition result 1904 does show the desired recognition result for the first recognition unit.
In this example, as shown in Figures 19A-19B, the user rotates the device to a portrait orientation (e.g., as shown in Figure 19B). In response to the change in device orientation, the handwriting input interface changes from the horizontal input mode to a vertical input mode, as shown in Figure 19B. In the vertical input mode, the layout of the handwriting input region 804, the candidate display region 806, and the text input area 808 may differ from that shown in the horizontal input mode. The specific layouts of the horizontal and vertical input modes can vary to accommodate different device shapes and application needs. In some embodiments, when the device orientation is rotated and the input mode changes, the user device automatically enters the top-ranked result (e.g., result 1904) into the text input area 808 as text input 1910. The change of input mode and writing direction is also reflected in the orientation and position of the cursor 1912.
In some embodiments, the change of input mode is optionally triggered by the user touching an input-mode selection affordance 1908. In some embodiments, the input-mode selection affordance is also a graphical user interface element that displays the current writing mode, the current writing direction, and/or the current paragraph direction. In some embodiments, the input-mode selection affordance cycles through all of the available input modes and writing directions provided by the handwriting input interface 802. As shown in Figure 19A, the affordance 1908 may indicate that the current input mode is the horizontal input mode, in which the writing direction is left to right and the paragraph direction is top to bottom. In Figure 19B, the affordance 1908 may indicate that the current input mode is the vertical input mode, in which the writing direction is top to bottom and the paragraph direction is right to left. According to various embodiments, other combinations of writing direction and paragraph direction are also possible.
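The cycling behavior of an input-mode selection affordance such as 1908 might be sketched as below. The mode list and class names are illustrative assumptions rather than the patent's implementation; each invocation advances to the next (writing direction, paragraph direction) combination and wraps around:

```python
# Hypothetical set of supported (writing direction, paragraph direction)
# combinations; the description names left-to-right/top-to-bottom
# (horizontal mode) and top-to-bottom/right-to-left (vertical mode).
MODES = [
    ("left-to-right", "top-to-bottom"),   # horizontal input mode
    ("top-to-bottom", "right-to-left"),   # vertical input mode
    ("right-to-left", "top-to-bottom"),
    ("top-to-bottom", "left-to-right"),
]

class ModeAffordance:
    """Each invocation advances to the next available mode, wrapping."""
    def __init__(self, modes):
        self.modes = modes
        self.index = 0

    def current(self):
        return self.modes[self.index]

    def invoke(self):
        # One tap on the affordance moves to the next combination.
        self.index = (self.index + 1) % len(self.modes)
        return self.modes[self.index]
```

After as many invocations as there are modes, the affordance returns to the starting combination.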
As shown in Figure 19C, the user has entered multiple new strokes 1914 (e.g., the handwritten strokes for the two Chinese characters "春晓" ("spring dawn")) in the handwriting input region 804 in the vertical input mode. The handwriting input is written along the vertical writing direction. The user device segments the handwriting input along the vertical direction into two recognition units, and displays two recognition results 1916 and 1918, each including two recognized characters arranged along the vertical direction.
Figures 19C-19D show that, when the user selects a displayed recognition result (e.g., result 1916), the selected recognition result is entered into the text input area 808 along the vertical direction.
Figures 19E-19F show that the user has entered additional lines of handwriting input 1920 along the vertical writing direction. These lines extend according to the paragraph direction of conventional vertical Chinese writing, from right to left. In some embodiments, the candidate display region 806 also displays the recognition results (e.g., results 1922 and 1924) along the same writing direction and paragraph direction as the handwriting input region. In some embodiments, other writing directions and paragraph directions are provided by default according to the dominant language associated with the user device or the language of a soft keyboard installed on the user device (e.g., Arabic, Chinese, Japanese, English, etc.).
Figures 19E-19F show that, when the user selects a recognition result (e.g., result 1922), the text of the selected recognition result is entered into the text input area 808. As shown in Figure 19F, the current text input in the text input area 808 therefore includes both text written in the horizontal mode, with a left-to-right writing direction, and text written in the vertical mode, with a top-to-bottom writing direction. The paragraph direction of the horizontal text is top to bottom, while the paragraph direction of the vertical text is right to left.
In some embodiments, the user device allows the user to independently establish a preferred writing direction and paragraph direction for each of the handwriting input region 804, the candidate display region 806, and the text input area 808. In some embodiments, the user device allows the user to establish a preferred writing direction and paragraph direction for each of the handwriting input region 804, the candidate display region 806, and the text input area 808 independently in association with each device orientation.
Figures 20A-20C are a flow chart of an exemplary process 2000 for changing the text input direction and the handwriting input direction of a user interface. Figures 19A-19F illustrate process 2000 in accordance with some embodiments.
In some embodiments, the user device determines (2002) the orientation of the device. The device orientation, and changes in the orientation, can be detected by an accelerometer and/or other orientation-sensing elements in the user device. In some embodiments, in accordance with the device being in a first orientation, the user device provides (2004) a handwriting input interface on the device in a horizontal input mode. A respective line of handwriting input entered in the horizontal input mode is segmented into one or more respective recognition units along a horizontal writing direction. In some embodiments, in accordance with the device being in a second orientation, the device provides (2006) the handwriting input interface on the device in a vertical input mode. A respective line of handwriting input entered in the vertical input mode is segmented into one or more respective recognition units along a vertical writing direction.
In some embodiments, while operating in the horizontal input mode (2008): the device detects (2010) a change of the device orientation from the first orientation to the second orientation. In some embodiments, in response to the change in device orientation, the device switches (2012) from the horizontal input mode to the vertical input mode. For example, this is illustrated in Figures 19A-19B. In some embodiments, while operating in the vertical input mode (2014): the user device detects (2016) a change of the device orientation from the second orientation to the first orientation. In some embodiments, in response to the change in device orientation, the user device switches (2018) from the vertical input mode to the horizontal input mode. In some embodiments, the association between device orientation and input mode can be the opposite of that described above.
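The orientation-driven switching described above can be sketched as a small handler whose orientation-to-mode mapping is left configurable, since the association may also be reversed. All names here are illustrative assumptions:

```python
class HandwritingInterface:
    """Minimal sketch of orientation-driven input-mode switching."""
    def __init__(self, landscape_mode="horizontal", portrait_mode="vertical"):
        # The mapping is a parameter because some embodiments reverse it.
        self.mapping = {"landscape": landscape_mode, "portrait": portrait_mode}
        self.input_mode = None

    def on_orientation_change(self, orientation):
        new_mode = self.mapping[orientation]
        if new_mode != self.input_mode:
            # Here the real device would re-layout regions 804/806/808.
            self.input_mode = new_mode
        return self.input_mode
```

With the default mapping, rotating to landscape yields the horizontal input mode and rotating to portrait yields the vertical input mode; constructing with swapped arguments models the reversed association.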
In some embodiments, while operating in the horizontal input mode (2020): the user device receives (2022) a first multi-character handwriting input from the user. In response to the first multi-character handwriting input, the user device presents (2024) a first multi-character recognition result in the candidate display region of the handwriting input interface in accordance with the horizontal writing direction. For example, this is illustrated in Figure 19A. In some embodiments, while operating in the vertical input mode (2026): the user device receives (2028) a second multi-character handwriting input from the user. In response to the second multi-character handwriting input, the user device presents (2030) a second multi-character recognition result in the candidate display region in accordance with the vertical writing direction. For example, this is illustrated in Figures 19C and 19E.
In some embodiments, the user device receives (2032) a first user input selecting the first multi-character recognition result, for example as shown in Figures 19A-19B, where the selection is made implicitly by the input that changes the input direction (e.g., rotating the device or selecting the affordance 1908). The user device receives (2034) a second user input selecting the second multi-character recognition result, for example as shown in Figure 19C or Figure 19E. The user device concurrently displays (2036) the respective text of the first multi-character recognition result and of the second multi-character recognition result in the text input area of the handwriting input interface, where the respective text of the first multi-character recognition result is displayed in accordance with the horizontal writing direction, and the respective text of the second multi-character recognition result is displayed in accordance with the vertical writing direction. For example, this is illustrated in the text input area 808 of Figure 19F.
In some embodiments, the handwriting input region accepts multiple lines of handwriting input along the horizontal writing direction, with a default top-to-bottom paragraph direction. In some embodiments, the horizontal writing direction is from left to right. In some embodiments, the horizontal writing direction is from right to left. In some embodiments, the handwriting input region accepts multiple lines of handwriting input along the vertical writing direction, with a default left-to-right paragraph direction. In some embodiments, the handwriting input region accepts multiple lines of handwriting input along the vertical writing direction, with a default right-to-left paragraph direction. In some embodiments, the vertical writing direction is from top to bottom. In some embodiments, the first orientation is by default a landscape orientation, and the second orientation is by default a portrait orientation. In some embodiments, the user device provides a respective affordance in the handwriting input interface for manually switching between the horizontal input mode and the vertical input mode, regardless of device orientation. In some embodiments, the user device provides a respective affordance in the handwriting input interface for manually switching between two alternative writing directions. In some embodiments, the user device provides a respective affordance in the handwriting input interface for manually switching between two alternative paragraph directions. In some embodiments, the affordance is a toggle button that, when invoked once or several times in succession, rotates through every possible combination of input direction and paragraph direction.
In some embodiments, the user device receives (2038) a handwriting input from the user. The handwriting input includes multiple handwritten strokes provided in the handwriting input region of the handwriting input interface. In response to the handwriting input, the user device displays (2040) one or more recognition results in the candidate display region of the handwriting input interface. While displaying the one or more recognition results in the candidate display region, the user device detects (2042) a user input for switching from the current handwriting input mode to an alternative handwriting input mode. In response to the user input (2044): the user device switches (2046) from the current handwriting input mode to the alternative handwriting input mode. In some embodiments, the user device clears (2048) the handwriting input from the handwriting input region. In some embodiments, the user device automatically enters (2050), into the text input area of the handwriting input interface, the top-ranked recognition result among the one or more recognition results displayed in the candidate display region. For example, this is illustrated in Figures 19A-19B, where the current handwriting input mode is the horizontal input mode and the alternative handwriting input mode is the vertical input mode. In some embodiments, the current handwriting input mode is the vertical input mode, and the alternative handwriting input mode is the horizontal input mode. In some embodiments, the current and alternative handwriting input modes are optionally any two modes that provide different handwriting input directions or paragraph directions. In some embodiments, the user input is (2052) rotating the device from its current orientation to a different orientation. In some embodiments, the user input is invoking an affordance for manually switching from the current handwriting input mode to the alternative handwriting input mode.
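The mode-switch behavior just described — clear the handwriting input region and auto-commit the top-ranked candidate — might be sketched as below. The class and attribute names are illustrative; a real device would also re-layout the interface:

```python
class InputModeSwitcher:
    """Sketch of steps 2046-2050: on a mode switch (rotation or
    affordance tap), commit the top-ranked candidate and clear the
    handwriting input region."""
    def __init__(self):
        self.mode = "horizontal"
        self.strokes = []      # contents of the handwriting input region
        self.candidates = []   # ranked recognition results
        self.text_input = []   # committed text

    def switch_mode(self, new_mode):
        if self.candidates:
            # Auto-enter the top-ranked recognition result (2050).
            self.text_input.append(self.candidates[0])
        self.strokes.clear()       # clear the handwriting input (2048)
        self.candidates.clear()
        self.mode = new_mode       # switch modes (2046)
```

The same handler can serve both triggers (device rotation and manual affordance invocation), since both funnel into a single mode change.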
As described herein, the handwriting input module allows the user to enter handwritten strokes and/or characters in any temporal order. It is therefore advantageous to be able to delete an individual handwritten character within a multi-character handwriting input, and to rewrite the same or a different handwritten character at the position of the deleted character, because this can help the user correct a long handwriting input without deleting the entire input.
Figures 21A-21H show exemplary user interfaces for visually highlighting and/or deleting the recognition units identified in the multiple handwritten strokes currently accumulated in the handwriting input region. When the user device allows multi-character or even multi-line handwriting input, it is particularly useful to allow the user to individually select, review, and delete any of the multiple recognition units identified in the input. By allowing the user to delete a particular recognition unit at the beginning or in the middle of the handwriting input, the user can make corrections to a long input without having to delete all of the recognition units that follow the undesired one.
As shown in Figures 21A-21C, the user has provided multiple handwritten strokes (e.g., strokes 2102, 2104, and 2106) in the handwriting input region 804 of the handwriting input user interface 802. As the user continues to provide additional strokes to the handwriting input region 804, the user device updates the recognition units identified in the handwriting input currently accumulated in the handwriting input region, and revises the recognition results in accordance with the output characters identified from the updated recognition units. As shown in Figure 21C, the user device identifies two recognition units in the current handwriting input, and presents three recognition results each including two Chinese characters (e.g., 2108, 2110, and 2112).
In this example, after the user has written the two handwritten characters, the user realizes that the first recognition unit was not written correctly, and that as a result the user device has not identified and presented the desired recognition result in the candidate display region.
In some embodiments, when the user provides a tap gesture on the touch-sensitive display (e.g., a contact followed immediately by lift-off at the same position), the user device interprets the tap gesture as an input that causes each recognition unit currently identified in the handwriting input region to be visually highlighted. In some embodiments, another predetermined gesture (e.g., a multi-finger swipe gesture over the handwriting input region) causes the user device to highlight each recognition unit in the handwriting input region 804. A tap gesture is sometimes preferred because it is easily distinguished from handwritten strokes, which usually involve longer sustained contact and movement of the contact within the handwriting input region 804. A multi-touch gesture is sometimes preferred because it, too, is easily distinguished from handwritten strokes, which usually involve only a single contact within the handwriting input region 804. In some embodiments, the user device provides in the user interface an affordance 2112 that can be invoked by the user (e.g., by a contact 2114) to visually highlight each recognition unit (e.g., as shown by the boxes 2108 and 2110). In some embodiments, such an affordance is preferred when there is sufficient screen space to accommodate it. In some embodiments, the affordance can be invoked repeatedly in succession, causing the user device to visually highlight, in turn, the one or more recognition units identified by different segmentation chains in the segmentation lattice, and to turn off the highlighting once all of the segmentation chains have been shown.
As shown in Figure 21D, when the user has provided the necessary gesture to highlight each recognition unit in the handwriting input region 804, the user device also displays a respective deletion affordance over each highlighted recognition unit (e.g., the small delete buttons 2116 and 2118). Figures 21E-21F show that, when the user touches (e.g., via a contact 2120) the deletion affordance of a respective recognition unit (e.g., the delete button 2116 for the first recognition unit in box 2118), the corresponding recognition unit (e.g., in box 2118) is removed from the handwriting input region 804. In this particular example, the deleted recognition unit is neither the most recently entered recognition unit in time, nor the spatially last recognition unit along the writing direction. In other words, the user can delete any recognition unit, regardless of where and when it was provided in the handwriting input region. Figure 21F shows that, in response to the deletion of the first recognition unit in the handwriting input region, the user device also updates the recognition results displayed in the candidate display region 806. As shown in Figure 21F, the user device also deletes from the recognition results the candidate characters corresponding to the deleted recognition unit. As a result, a new recognition result 2120 is displayed in the candidate display region 806.
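The independent deletion of a recognition unit and the corresponding candidate update might be modeled as below. The pairing of each stroke subset with a single output character is a simplifying assumption, and `top_candidate` stands in for the real recognition model:

```python
class HandwritingBuffer:
    """Sketch of Figures 21E-21F: each recognition unit pairs a stroke
    subset with its output character; deleting any unit removes its
    strokes and drops its character from the candidate result."""
    def __init__(self):
        self.units = []  # list of (strokes, output_char) tuples

    def add_unit(self, strokes, char):
        self.units.append((list(strokes), char))

    def delete_unit(self, index):
        # Any unit may be deleted -- first, middle, or last -- with no
        # regard to the temporal order in which the units were written.
        del self.units[index]

    def strokes(self):
        return [s for unit_strokes, _ in self.units for s in unit_strokes]

    def top_candidate(self):
        # Placeholder for regenerating the recognition result.
        return "".join(char for _, char in self.units)
```

Deleting a middle unit leaves the surrounding units and their candidate characters intact, mirroring the figure.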
As shown in Figures 21G-21H, after the first recognition unit has been removed from the handwriting input region 804, the user has provided multiple new handwritten strokes 2122 in the area previously occupied by the deleted recognition unit. The user device re-segments the handwriting input currently accumulated in the handwriting input region 804. Based on the recognition units identified from the handwriting input, the user device regenerates the recognition results (e.g., results 2124 and 2126) in the candidate display region 806. Figures 21G-21H show that, when the user (e.g., via a contact 2128) selects one of the recognition results (e.g., result 2124), the text of the selected recognition result is entered into the text input area 808.
Figures 22A-22B are a flow chart of an exemplary process 2200 in which each recognition unit identified in the current handwriting input is visually presented and can be independently deleted, without regard to the temporal order in which the recognition units were formed. Figures 21A-21H illustrate process 2200 in accordance with some embodiments.
In exemplary process 2200, the user device receives (2202) a handwriting input from the user. The handwriting input includes multiple handwritten strokes provided on a touch-sensitive surface coupled to the device. In some embodiments, the user device renders (2204) the multiple handwritten strokes in the handwriting input region of the handwriting input interface (e.g., handwriting input region 804). In some embodiments, the user device segments (2206) the multiple handwritten strokes into two or more recognition units, each recognition unit including a respective subset of the multiple handwritten strokes.
In some embodiments, the user device receives (2208) an edit request from the user. In some embodiments, the edit request is (2210) a contact detected over a predetermined affordance provided in the handwriting input interface (e.g., the affordance 2112 in Figure 21D). In some embodiments, the edit request is (2212) a tap gesture detected over a predetermined region in the handwriting input interface. In some embodiments, the predetermined region is within the handwriting input region of the handwriting input interface. In some embodiments, the predetermined region is outside the handwriting input region of the handwriting input interface. In some embodiments, another predetermined gesture outside the handwriting input region (e.g., a crossing gesture, a horizontal swipe gesture, a vertical swipe gesture, or a diagonal swipe gesture) can be used as the edit request. A gesture outside the handwriting input region is easily distinguished from handwritten strokes, because it is provided outside the handwriting input region.
In some embodiments, in response to the edit request, the user device visually distinguishes (2214) the two or more recognition units in the handwriting input region, for example using the boxes 2108 and 2110 in Figure 21D. In some embodiments, visually distinguishing the two or more recognition units further includes (2216) highlighting the respective boundaries between the two or more recognition units in the handwriting input region. In various embodiments, different ways of visually distinguishing the recognition units identified in the current handwriting input may be used.
In some embodiments, the user device provides (2218) a means for independently deleting each of the two or more recognition units from the handwriting input region. In some embodiments, the means for independently deleting each of the two or more recognition units is the display of a respective delete button adjacent to each recognition unit, such as the delete buttons 2116 and 2118 shown in Figure 21D. In some embodiments, the means for independently deleting each of the two or more recognition units is a means for detecting a predetermined deletion gesture over each recognition unit. In some embodiments, the user device does not visibly display a deletion affordance over each highlighted recognition unit. Instead, in some embodiments, the user is allowed to use a deletion gesture to delete the recognition unit underneath the deletion gesture. In some embodiments, while the user device displays the recognition units in the visually highlighted manner, the user device does not accept additional handwritten strokes in the handwriting input region. Instead, any predetermined gesture detected over a visually highlighted recognition unit causes the user device to remove that recognition unit from the handwriting input region and to revise the recognition results shown in the candidate display region accordingly. In some embodiments, a tap gesture causes the user device to visually highlight each recognition unit identified in the handwriting recognition region, and the user can then use a delete button to independently delete each recognition unit in the reverse writing direction.
In some embodiments, the user device receives (2224) from the user, through the means provided, a deletion input for independently deleting a first recognition unit of the two or more recognition units from the handwriting input region, for example as shown in Figure 21E. In response to the deletion input, the user device removes (2226) the respective subset of handwritten strokes in the first recognition unit from the handwriting input region, for example as shown in Figure 21F. In some embodiments, the first recognition unit is the spatially initial recognition unit among the two or more recognition units. In some embodiments, the first recognition unit is a spatially middle recognition unit among the two or more recognition units, for example as shown in Figures 21E-21F. In some embodiments, the first recognition unit is the spatially last recognition unit among the two or more recognition units.
In some embodiments, the user device generates (2228) a segmentation lattice from the multiple handwritten strokes, the segmentation lattice including multiple alternative segmentation chains, each representing a respective set of recognition units identified from the multiple handwritten strokes. For example, Figure 21G shows recognition results 2124 and 2126, where recognition result 2124 is generated from a segmentation chain having two recognition units, and recognition result 2126 is generated from another segmentation chain having three recognition units. In some embodiments, the user device receives (2230) two or more consecutive edit requests from the user. For example, the two or more consecutive edit requests can be several consecutive taps on the affordance 2112 in Figure 21G. In some embodiments, in response to each of the two or more consecutive edit requests, the user device visually distinguishes (2232), in the handwriting input region, the respective set of recognition units from a different one of the multiple alternative segmentation chains. For example, in response to a first tap gesture, two recognition units are highlighted in the handwriting input region 804 (e.g., one for each of the characters "帽" and "子"), and in response to a second tap gesture, three recognition units are highlighted (e.g., one for each of the characters "巾", "冒", and "子"). In some embodiments, in response to a third tap gesture, the visual highlighting is optionally removed from all of the recognition units, and the handwriting input region returns to the normal state, ready to accept additional strokes. In some embodiments, the user device provides (2234) a means for independently deleting each recognition unit of the respective set of recognition units currently indicated in the handwriting input region. In some embodiments, the means is a respective delete button for each highlighted recognition unit. In some embodiments, the means is a means for detecting a predetermined deletion gesture over each highlighted recognition unit, and for invoking a deletion function that deletes the highlighted recognition unit underneath the deletion gesture.
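The cycling through segmentation chains described above can be sketched as a list of alternative chains plus a highlight index, where one extra invocation past the last chain clears the highlighting and returns the region to its normal state. The class name and data shapes are illustrative:

```python
class SegmentationLattice:
    """Sketch of consecutive edit requests cycling through the
    alternative segmentation chains of one handwriting input
    (e.g. two units vs. three units for the same strokes)."""
    def __init__(self, chains):
        self.chains = chains   # each chain: list of recognition units
        self.highlight = None  # index of the chain currently highlighted

    def edit_request(self):
        """One tap on the edit affordance. Returns the chain to
        highlight, or None once all chains have been shown."""
        if self.highlight is None:
            self.highlight = 0
        elif self.highlight + 1 < len(self.chains):
            self.highlight += 1
        else:
            self.highlight = None  # back to normal, ready for strokes
        return None if self.highlight is None else self.chains[self.highlight]
```

With two chains, the first tap highlights the two-unit segmentation, the second tap the three-unit segmentation, and the third tap turns highlighting off.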
As described herein, in some embodiments, the user device provides a continuous input mode in the handwriting input region. Because the area of the handwriting input region on a portable user device is limited, it is sometimes desirable to provide a way to cache the handwriting input provided by the user, and to allow the user to reuse the screen space without committing the previously provided handwriting input. In some embodiments, the user device provides a scrolling handwriting input region, in which the input area is gradually shifted by a certain amount (e.g., one recognition unit at a time) as the user approaches the end of the handwriting input region. In some embodiments, because shifting the existing recognition units in the handwriting input region may interfere with the user's writing process and with the correct segmentation of the recognition units, it is sometimes advantageous to reuse the previously used area of the input region without dynamically shifting the recognition units. In some embodiments, when the user reuses an area occupied by handwriting input that has not yet been entered into the text input area, the top-ranked recognition result for the handwriting input region is automatically entered into the text input area, so that the user can continuously provide new handwriting input without explicitly selecting the top-ranked recognition result.
Some conventional systems allow the user to write over existing handwriting input that is still shown in the handwriting input region. In such systems, temporal information is used to determine whether a new stroke is part of an earlier recognition unit or of a new recognition unit. Such systems, which depend on temporal information, impose strict requirements on the speed and rhythm with which the user provides the handwriting input, and many users find these requirements difficult to meet. In addition, the visual rendering of the overlapping handwriting input may present a jumble that is difficult for the user to decipher. The writing process can therefore be frustrating and confusing, leading to a poor user experience.
As described herein, a fading process is used to indicate when the user can reuse the area occupied by a previously written recognition unit and continue writing in the handwriting input region. In some embodiments, the fading process gradually reduces the visibility of each recognition unit that has been present in the handwriting input region for a threshold amount of time, so that when new strokes are written over it, the existing text does not visually compete with the new strokes. In some embodiments, writing over a faded recognition unit causes the top-ranked recognition result for that recognition unit to be automatically entered into the text input area, without the user stopping writing to explicitly provide a selection input for the top-ranked recognition result. This implicit and automatic confirmation of the top-ranked recognition result improves the input efficiency and speed of the handwriting input interface, and reduces the cognitive load imposed on the user, preserving the flow of the user's current text composition. In some embodiments, writing over a faded recognition unit does not cause the top-ranked recognition result to be automatically selected. Instead, the faded recognition unit can be cached in a handwriting input stack and combined with the new handwriting input as the current handwriting input. Before making a selection, the user can see the recognition results generated based on all of the recognition units accumulated in the handwriting input stack.
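The handwriting input stack described above might be sketched as below, with faded units cached as layers whose strokes are recombined into a single current handwriting input. This is a simplified, hypothetical model; names are illustrative:

```python
class HandwritingStack:
    """Sketch of the handwriting input stack: faded recognition units
    are cached in lower layers instead of being committed, and
    recognition runs over the strokes of every layer together, so the
    displayed candidates reflect all accumulated input."""
    def __init__(self):
        self.layers = []  # each layer: list of strokes, oldest first

    def push_layer(self, strokes):
        # A faded (or newly written) group of strokes becomes one layer.
        self.layers.append(list(strokes))

    def current_input(self):
        # Faded and visible strokes combine into one handwriting input
        # that is fed to the recognizer as a whole.
        return [s for layer in self.layers for s in layer]

    def clear(self):
        # Called once a recognition result is selected and committed.
        self.layers.clear()
```

A candidate list generated from `current_input()` therefore covers both what has faded from view and what is currently visible.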
Figures 23A-23J show exemplary user interfaces and processes in which, for example after a predetermined amount of time, a recognition unit provided in a given area of the handwriting input region gradually fades out from that area, and in which, after the fade-out in a particular area, the user is allowed to provide new handwritten strokes in that area.
As shown in Figure 23A, the user has provided multiple handwritten strokes 2302 (e.g., the three handwritten strokes for the capital letter "I") in the handwriting input region 804. The user device identifies the handwritten strokes 2302 as a recognition unit. In some embodiments, the handwriting input currently shown in the handwriting input region 804 is cached in a first layer of the handwriting input stack of the user device. Several recognition results generated based on the identified recognition unit are provided in the candidate display region 806.
Figure 23B shows that, as the user continues to write one or more additional strokes 2304 to the right of the strokes 2302, the handwritten strokes 2302 of the first recognition unit begin to gradually fade out in the handwriting input region 804. In some embodiments, an animation is shown to simulate the gradual fading or dissipation of the visual rendering of the first recognition unit. For example, the animation can produce the visual effect of ink evaporating from a whiteboard. In some embodiments, the fading is not uniform across the entire recognition unit. In some embodiments, the recognition unit fades further as time passes, until eventually the recognition unit is completely invisible in the handwriting region. However, even if a recognition unit is no longer visible in the handwriting input region 804, in some embodiments the invisible recognition unit is still retained at the top of the handwriting input stack, and the recognition results generated from the recognition unit continue to be displayed in the candidate display region. In some embodiments, a faded recognition unit is not removed from view entirely until new handwriting input is written over it.
In some embodiments, the user device allows new handwriting input to be provided over the region occupied by a fading recognition unit as soon as the fade-out animation begins. In some embodiments, the user device allows new handwriting input to be provided over the region occupied by a fading recognition unit only after the fading has proceeded to a particular stage (for example, until the recognition unit has reached its lightest level or has become completely invisible in that region).
Figure 23C shows that the first recognition unit (i.e., strokes 2302) has completed its fade-out process (for example, the ink color has stabilized at a very light level or has become invisible). The user device identifies additional recognition units from the additional handwritten strokes provided by the user (for example, recognition units for the handwritten letters "a" and "m"), and presents updated recognition results in candidate display area 804.
Figures 23D-23F show that, over time, the user provides a number of additional handwritten strokes (for example, 2304 and 2306) in handwriting input area 804. Meanwhile, the previously identified recognition units gradually fade out from handwriting input area 804. In some embodiments, the fade-out process for each recognition unit starts a predetermined amount of time after the recognition unit has been identified. In some embodiments, the fade-out process for a recognition unit does not start until the user has begun entering a second recognition unit downstream from it. As shown in Figures 23B-23F, when the handwriting input is provided in a cursive style, a single stroke (for example, stroke 2304 or stroke 2306) may pass through multiple recognition units in the handwriting input area (for example, the recognition units for the individual handwritten letters in the words "am" or "back").
Figure 23G shows that, even after a recognition unit has started its fade-out process, the user can still return it to the un-faded state by a predetermined revival input, such as a tap gesture on delete button 2310 (for example, as indicated by contact 2308 followed by an immediate lift-off). When a recognition unit is revived, its appearance returns to the normal visibility level. In some embodiments, the revival of faded recognition units proceeds character by character, in the direction opposite to the writing direction of handwriting input area 804. In some embodiments, the revival of faded recognition units proceeds word by word in handwriting input area 804. As shown in Figure 23G, the recognition units for the word "back" are restored from the completely faded state to the completely un-faded state. In some embodiments, when a recognition unit is restored to the un-faded state, the clock for starting the fade-out process is reset for each recognition unit.
Figure 23H shows that sustained contact on the delete button deletes the last recognition unit in the default writing direction (for example, the recognition unit for the letter "k" in the word "back") from handwriting input area 804. As the delete input continues to be held, additional recognition units are deleted one after another in the reverse writing direction (for example, the recognition units for the letters "c", "a", and "b" in the word "back"). In some embodiments, the deletion of recognition units proceeds word by word, and all of the letters of the handwritten word "back" are removed from handwriting input area 804 at the same time. Figure 23H also shows that, because contact 2308 is maintained on delete button 2310 after the recognition unit for the letter "b" in the handwritten word "back" has been deleted, the previously faded recognition unit "m" is also revived.
As shown in Figure 23I, if the delete input is stopped before the revived recognition unit "m" in the handwritten word "am" is deleted, the revived recognition units will gradually fade out again. In some embodiments, the state of each recognition unit in the handwriting input stack is maintained and updated (for example, a state selected from a set of one or more faded states and an un-faded state).
Figure 23J shows that, in some embodiments, when the user provides one or more strokes 2312 over the region occupied by a faded recognition unit in the handwriting input area (for example, the recognition unit for the letter "I"), the text of the top-ranked recognition result for the handwriting input (for example, result 2314) is automatically entered into text input area 808 before strokes 2312, as shown in Figures 23I-23J. As shown in Figure 23J, the text "I am" is no longer shown as tentative, but has been committed to text input area 808. In some embodiments, once text input has been made for completely or partially faded handwriting input, the handwriting input is removed from the handwriting input stack. The newly entered strokes (for example, strokes 2312) become the current input in the handwriting input stack.
In some embodiments, when strokes 2312 are provided over the region occupied by a faded recognition unit in the handwriting input area (for example, the recognition unit for the letter "I"), the text of the top-ranked recognition result for the handwriting input (for example, result 2314) is not automatically entered into text input area 808 before strokes 2312. Instead, the current handwriting input in handwriting input area 804 (both the faded and the un-faded portions) is cleared and cached in the handwriting input stack. The new strokes 2312 are appended to the cached handwriting input in the handwriting input stack. The user device determines recognition results based on the entirety of the handwriting input currently accumulated in the handwriting input stack, and displays the recognition results in the candidate display area. In other words, even though only a portion of the currently accumulated handwriting input is shown in handwriting input area 804, the recognition results are generated based on the entire cached handwriting input in the handwriting input stack (both the visible portion and the no-longer-visible portion).
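The point above — recognition runs over the entirety of the cached input, not only the visible strokes — can be sketched in a few lines; the function and its argument names are hypothetical.

```python
def current_recognition_input(stack_layers, new_strokes):
    """Concatenate all cached handwriting input with the newly drawn
    strokes; recognition always operates on the full sequence even though
    only the new strokes remain visible on screen."""
    full_input = []
    for layer in stack_layers:      # earlier, faded (no-longer-visible) strokes
        full_input.extend(layer)
    full_input.extend(new_strokes)  # strokes currently visible
    return full_input
```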
Figure 23K shows that the user has entered more strokes 2316 in handwriting input area 804, which fade over time. Figure 23L shows that new strokes 2318, written over the faded strokes 2312 and 2316, cause the text of the top-ranked recognition result 2320 for the faded strokes 2312 and 2316 to be entered into text input area 808.
In some embodiments, the user optionally provides handwriting input in multiple lines. In some embodiments, when multi-line input is enabled, the same fade-out process can be used to clear the handwriting input area for new handwriting input.
Figures 24A-24B are a flow chart of an exemplary process 2400 for providing the fade-out process in the handwriting input area of a handwriting input interface. Figures 23A-23K illustrate process 2400 in accordance with some embodiments.
In some embodiments, the device receives (2402) a first handwriting input from a user. The first handwriting input includes a plurality of handwritten strokes, and the plurality of handwritten strokes form multiple recognition units distributed along a respective writing direction associated with the handwriting input area of the handwriting input interface. In some embodiments, as the user provides the handwritten strokes, the user device renders (2404) each of the plurality of handwritten strokes in the handwriting input area.
In some embodiments, for each of the multiple recognition units, the user device starts (2406) a respective fade-out process after the recognition unit has been completely rendered. In some embodiments, during the respective fade-out process, the rendering of the recognition unit in the first handwriting input is faded. This is illustrated in Figures 23A-23F in accordance with some embodiments.
In some embodiments, the user device receives (2408) from the user a second handwriting input over a region of the handwriting input area occupied by a faded recognition unit of the multiple recognition units, as shown, for example, in Figures 23I-23J and Figures 23K-23L. In some embodiments, in response to receiving the second handwriting input (2410), the user device renders (2412) the second handwriting input in the handwriting input area and removes (2414) all faded recognition units from the handwriting input area. In some embodiments, all recognition units entered in the handwriting input area before the second handwriting input are removed from the handwriting input area, regardless of whether their fade-out processes have started. This is illustrated, for example, in Figures 23I-23J and Figures 23K-23L.
In some embodiments, the user device generates (2416) one or more recognition results for the first handwriting input. In some embodiments, the user device displays (2418) the one or more recognition results in the candidate display area of the handwriting input interface. In some embodiments, in response to receiving the second handwriting input, the user device automatically enters (2420), without user selection, the text of the top-ranked recognition result shown in the candidate display area into the text input area of the handwriting input interface. This is illustrated, for example, in Figures 23I-23J and Figures 23K-23L.
In some embodiments, the user device stores (2422) an input stack including the first handwriting input and the second handwriting input. In some embodiments, the user device generates (2424) one or more multi-character recognition results, each of which includes a respective spatial sequence of characters recognized from a concatenation of the first handwriting input and the second handwriting input. In some embodiments, the user device displays (2426) the one or more multi-character recognition results in the candidate display area of the handwriting input interface, while the rendering of the second handwriting input has replaced the rendering of the first handwriting input in the handwriting input area.
In some embodiments, the respective fade-out process for each recognition unit is started after a predetermined time period has elapsed since the user completed the recognition unit.
In some embodiments, the fade-out process for each recognition unit is started when the user begins entering strokes for the next recognition unit after that recognition unit.
In some embodiments, the end state of the respective fade-out process for each recognition unit is a state in which the recognition unit has a predetermined minimum visibility.
In some embodiments, the end state of the respective fade-out process for each recognition unit is a state in which the recognition unit has zero visibility.
In some embodiments, after the last recognition unit in the first handwriting input has faded, the user device receives (2428) a predetermined revival input from the user. In response to receiving the predetermined revival input, the user device restores (2430) the last recognition unit from the faded state to the un-faded state. This is illustrated, for example, in Figures 23F-23H. In some embodiments, the predetermined revival input is an initial contact detected on a delete button provided in the handwriting input interface. In some embodiments, a sustained contact detected on the delete button deletes the last recognition unit from the handwriting input area and restores the second-to-last recognition unit from the faded state to the un-faded state. This is illustrated, for example, in Figures 23G-23H.
As described herein, the multi-script handwriting recognition model classifies handwritten characters and performs recognition that is independent of both stroke order and stroke direction. In some embodiments, the recognition model is trained only on spatially derived features contained in the flattened images of writing samples corresponding to the different characters in the vocabulary of the handwriting recognition model. Because the images of the writing samples do not include any temporal information associated with the individual strokes included in the images, the resulting recognition model is independent of stroke order and stroke direction.
As described above, handwriting recognition that is independent of stroke order and stroke direction offers many advantages over conventional recognition systems, which rely on information associated with the temporal generation of a character (for example, the temporal sequence of the strokes in the character). However, in real-time handwriting recognition scenarios, temporal information associated with the individual strokes is available, and it is sometimes beneficial to use this information to improve the recognition accuracy of the handwriting recognition system. The following describes a technique for integrating temporally derived stroke distribution information into the spatial feature extraction of the handwriting recognition model, such that using the temporally derived stroke distribution information does not destroy the stroke-order and/or stroke-direction independence of the handwriting recognition system. Based on the stroke distribution information associated with different characters, it becomes possible to disambiguate similar-looking characters that are produced with dramatically different sets of strokes.
In some embodiments, when handwriting input is converted into an input image (for example, an input bitmap image) for the handwriting recognition model (for example, a CNN), the temporal information associated with each stroke is lost. For example, the Chinese character "国" ("state") can be written with eight strokes (labeled #1-#8 in Figure 27). The stroke order and stroke directions for the character provide certain unique characteristics associated with the character. One straightforward but impractical way to capture stroke order and stroke direction information, without destroying the stroke-order and stroke-direction independence of the recognition system, would be to explicitly enumerate all possible permutations and combinations of stroke order and stroke direction in the training samples. Even for a character of only moderate complexity, this can involve more than one billion possibilities, making it infeasible in practice, if not impossible. As described herein, a stroke distribution profile is instead generated for each writing sample, abstracting away the temporal aspect of stroke generation (i.e., the temporal information). The stroke distribution profiles of the training writing samples are used to extract a set of temporally derived features, which are then combined with the spatially derived features (for example, from the input bitmap images) to improve recognition accuracy without affecting the stroke-order and stroke-direction independence of the handwriting recognition system.
As described herein, the temporal information associated with a character is extracted by computing various pixel distributions that characterize each handwritten stroke. When projected onto a given direction, each handwritten stroke of a character produces a deterministic pattern (or shape). Although this pattern by itself may not be sufficient to positively identify the stroke, when combined with other similar patterns it may be sufficient to capture the specific characteristics intrinsic to that particular stroke. Integrating this stroke representation with the spatially extracted features (for example, the feature extraction based on the input image in a CNN) provides orthogonal information that can be used to disambiguate similar-looking characters in the vocabulary of the handwriting recognition model.
Figures 25A-25B are a flow chart of an exemplary process 2500 for integrating the temporally derived features and the spatially derived features of handwriting samples during training of a handwriting recognition model, such that the resulting recognition model remains independent of stroke order and stroke direction. In some embodiments, exemplary process 2500 is performed on a server device that provides the trained recognition model to a user device (for example, portable device 100). In some embodiments, the server device includes one or more processors and memory containing instructions which, when executed by the one or more processors, cause the processors to perform process 2500.
In exemplary process 2500, the device separately trains (2502) a set of spatially derived features and a set of temporally derived features of the handwriting recognition model. The set of spatially derived features is trained on a corpus of training images, each being an image of a handwriting sample for a respective character of a corresponding output character set. The set of temporally derived features is trained on stroke distribution profiles, each of which numerically characterizes the spatial distribution of the multiple strokes in a handwriting sample for a respective character of the output character set.
In some embodiments, separately training the set of spatially derived features further comprises (2504) training a convolutional neural network having an input layer, an output layer, and multiple convolutional layers, including a first convolutional layer, a last convolutional layer, zero or more intermediate convolutional layers between the first convolutional layer and the last convolutional layer, and a hidden layer between the last convolutional layer and the output layer. An exemplary convolutional network 2602 is shown in Figure 26. Exemplary convolutional network 2602 can be implemented in substantially the same manner as convolutional network 602 shown in Figure 6. Convolutional network 2602 includes an input layer 2606, an output layer 2608, multiple convolutional layers (including a first convolutional layer 2610a, zero or more intermediate convolutional layers, and a last convolutional layer 2610n), and a hidden layer 2614 between the last convolutional layer and output layer 2608. Convolutional network 2602 also includes kernel layers 2616 and sub-sampling layers 2612 according to the arrangement shown in Figure 6. The training of the convolutional network is based on the images 2614 of the writing samples in training corpus 2604. The spatially derived features are obtained, and the respective weights associated with the different features are determined, by minimizing the recognition error on the training samples in the training corpus. Once trained, the same features and weights are used to recognize new handwriting samples not present in the training corpus.
In some embodiments, separately training the set of temporally derived features further comprises (2506) providing multiple stroke distribution profiles to a statistical model, to determine multiple temporally derived parameters and respective weights for the multiple temporally derived parameters, for classifying the respective characters in the output character set. In some embodiments, as shown in Figure 26, a stroke distribution profile 2620 is derived from each writing sample in training corpus 2622. Training corpus 2622 optionally includes the same writing samples as corpus 2604, but additionally includes the temporal information associated with the stroke generation in each writing sample. The stroke distribution profiles 2622 are provided to a statistical modeling process 2624, during which the temporally derived features are extracted and the respective weights for the different features are determined by minimizing the recognition or classification error based on a statistical modeling method (for example, CNN, K-nearest neighbors, etc.). As shown in Figure 26, the set of temporally derived features and respective weights are converted into a set of feature vectors (for example, feature vector 2626 or feature vector 2628) and injected into the corresponding layer of convolutional neural network 2602. The resulting network thus includes orthogonal spatially derived parameters and temporally derived parameters, which jointly contribute to the recognition of characters.
In some embodiments, the device combines (2508) the set of spatially derived features and the set of temporally derived features in the handwriting recognition model. In some embodiments, combining the set of spatially derived features and the set of temporally derived features in the handwriting recognition model includes (2510) injecting the multiple spatially derived parameters and the multiple temporally derived parameters into one of the convolutional layers or the hidden layer of the convolutional neural network. In some embodiments, the multiple temporally derived parameters and their respective weights are injected into the last convolutional layer of the convolutional neural network used for handwriting recognition (for example, last convolutional layer 2610n in Figure 26). In some embodiments, the multiple temporally derived parameters and their respective weights are injected into the hidden layer of the convolutional neural network used for handwriting recognition (for example, hidden layer 2614 in Figure 26).
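As a rough sketch of the feature-injection idea: the hidden layer can take the concatenation of the flattened convolutional features and the stroke-distribution-profile vector, so that a single set of weights spans both the spatially derived and the temporally derived parameters. All dimensions and the toy forward pass below are illustrative assumptions, not details from the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions only.
n_conv_features = 64   # flattened output of the last convolutional layer
n_temporal = 50        # stroke-distribution-profile features, e.g. top-10
                       # strokes x (4 occupancy ratios + 1 saturation ratio)
n_hidden = 128
n_classes = 100        # size of the output character set

# The hidden-layer weights cover BOTH feature groups.
W_hidden = rng.normal(0, 0.1, (n_conv_features + n_temporal, n_hidden))
W_out = rng.normal(0, 0.1, (n_hidden, n_classes))

def forward(conv_features, stroke_profile):
    """Concatenate spatial and temporal features before the hidden layer."""
    x = np.concatenate([conv_features, stroke_profile])
    h = np.tanh(x @ W_hidden)
    logits = h @ W_out
    e = np.exp(logits - logits.max())
    return e / e.sum()                 # softmax over candidate characters

probs = forward(rng.random(n_conv_features), rng.random(n_temporal))
```

Injecting at the last convolutional layer instead would place the concatenation one layer earlier; the principle is the same.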
In some embodiments, the device provides (2512) real-time handwriting recognition for a user's handwriting input using the handwriting recognition model.
In some embodiments, the device generates (2514) the corpus of stroke distribution profiles from multiple writing samples. In some embodiments, each of the multiple handwriting samples corresponds to (2516) a character in the output character set, and the respective spatial information of each constituent stroke of the writing sample is separately preserved as it is written. In some embodiments, to generate the corpus of stroke distribution profiles, the device performs (2518) the following steps:
For each of the multiple handwriting samples (2520): the device identifies (2522) the constituent strokes in the handwriting sample. For each identified stroke of the handwriting sample, the device computes (2524) a respective occupancy ratio along each of multiple predetermined directions, the occupancy ratio being the ratio between the projection span of the stroke in that direction and the maximum projection span of the writing sample. For each identified stroke of the handwriting sample, the device also computes (2526) a respective saturation ratio for the stroke, based on the ratio between the number of pixels in the stroke and the total number of pixels in the writing sample. The user device then generates (2528) a feature vector for the handwriting sample as the stroke distribution profile of the writing sample, the feature vector including at least the respective occupancy ratios and respective saturation ratios of N strokes in the handwriting sample, where N is a predetermined natural number. In some embodiments, N is smaller than the maximum stroke count observed in any single writing sample among the multiple writing samples.
In some embodiments, for each of the multiple handwriting samples: the device sorts, in descending order, the respective occupancy ratios of the identified strokes in each of the predetermined directions; and only the top-N ranked occupancy ratios and saturation ratios of the writing sample are included in the feature vector of the writing sample.
In some embodiments, the multiple predetermined directions include the horizontal direction, the vertical direction, the +45-degree direction, and the -45-degree direction of the writing sample.
In some embodiments, to provide real-time handwriting recognition for the user's handwriting input using the handwriting recognition model, the device receives the user's handwriting input and, in response, provides handwriting recognition output to the user substantially simultaneously with receiving the handwriting input.
Exemplary embodiments are described herein for purposes of illustration using the character "国" shown in Figure 27. In some embodiments, each input image of a handwritten character is optionally normalized into a square. The span of each individual handwritten stroke (for example, strokes #1, #2, ..., #8) is measured when projected onto the horizontal, vertical, +45-degree diagonal, and -45-degree diagonal directions of the square. The spans of each stroke Si for the four projection directions are recorded as xspan(i), yspan(i), cspan(i), and dspan(i), respectively. In addition, the maximum span observed across the whole image is also recorded: the maximum spans of the character for the four projection directions are recorded as xspan, yspan, cspan, and dspan, respectively. Four projection directions are considered here for exemplary purposes, although in principle any arbitrary set of projections can be used in various embodiments. Figure 27 shows, for the four projection directions, the maximum spans of the character "国" (for example, denoted xspan, yspan, cspan, and dspan) and the spans of one of its strokes (for example, stroke #4) (for example, denoted xspan(4), yspan(4), cspan(4), and dspan(4)).
In some embodiments, once the above spans have been measured for all strokes 1 through S, where S is the number of individual handwritten strokes associated with the input image, the respective occupancy ratios along each projection direction are computed. For example, the respective occupancy ratio Rx(i) in the x direction for stroke Si is computed as Rx(i) = xspan(i)/xspan. Similarly, the respective occupancy ratios along the other projection directions can be computed: Ry(i) = yspan(i)/yspan, Rc(i) = cspan(i)/cspan, and Rd(i) = dspan(i)/dspan.
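A minimal sketch of the span and occupancy-ratio computation, with strokes represented as point sequences rather than pixels; the helper names are hypothetical.

```python
import numpy as np

def stroke_spans(points):
    """Projection spans of one stroke (a sequence of (x, y) points) onto
    the horizontal, vertical, +45-degree, and -45-degree directions."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    c = (x + y) / np.sqrt(2)   # +45-degree diagonal projection
    d = (x - y) / np.sqrt(2)   # -45-degree diagonal projection
    return {axis: vals.max() - vals.min()
            for axis, vals in {"x": x, "y": y, "c": c, "d": d}.items()}

def occupancy_ratios(strokes):
    """R_x(i) = xspan(i) / xspan, and likewise for y, c, d, where the
    denominators are the maximum spans of the whole character."""
    all_pts = np.vstack([np.asarray(s, dtype=float) for s in strokes])
    char_span = stroke_spans(all_pts)
    ratios = []
    for s in strokes:
        sp = stroke_spans(s)
        ratios.append({a: sp[a] / char_span[a] if char_span[a] else 0.0
                       for a in "xycd"})
    return ratios
```

For example, a purely horizontal stroke spanning the full character width yields an x occupancy ratio of 1.0 and a y occupancy ratio of 0.0.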
In some embodiments, the occupancy ratios of all strokes in each direction are independently sorted in descending order, so that for each projection direction a respective ranking of all strokes in the input image, by occupancy ratio in that direction, is obtained. The ranking of a stroke in each projection direction reflects the relative importance of the stroke along the associated projection direction. This relative importance is independent of the order and direction in which the strokes were produced in the writing sample. Therefore, this occupancy-ratio-based ranking is temporally derived information that is independent of stroke order and stroke direction.
In some embodiments, each stroke is assigned a relative weight indicating the importance of the stroke relative to the entire character. In some embodiments, the weight is measured by the ratio between the number of pixels in the stroke and the total number of pixels in the character. This ratio is referred to as the saturation ratio associated with each stroke.
In some embodiments, a feature vector can be created for each stroke based on the occupancy ratios and the saturation ratio of the stroke. For each character, a set of feature vectors comprising 5S features is created, where S is the number of strokes in the character. This set of features is referred to as the stroke distribution profile of the character.
In some embodiments, only a predetermined number of the top-ranked strokes are used when constructing the stroke distribution profile of each character. In some embodiments, the predetermined number of strokes is 10. Based on the top ten strokes, 50 stroke-derived features can be generated for each character. In some embodiments, these features are injected into the last convolutional layer of the convolutional neural network, or into a subsequent hidden layer.
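A sketch of assembling a fixed-length profile from the per-direction rankings plus the saturation ratios, padded with zeros when a character has fewer than N strokes. The exact packing order is an assumption made for illustration; the disclosure specifies only which quantities are included (4 occupancy ratios + 1 saturation ratio per retained stroke, i.e., 5 x 10 = 50 features for N = 10).

```python
def _pad(vals, n):
    """Truncate or zero-pad a list of values to exactly n entries."""
    vals = list(vals)[:n]
    return vals + [0.0] * (n - len(vals))

def stroke_profile(occupancy, saturation, top_n=10):
    """Build a 5 * top_n feature vector for one character.

    occupancy:  list of dicts, one per stroke, with the occupancy ratios
                under keys "x", "y", "c", "d" (hypothetical format).
    saturation: list of per-stroke saturation ratios.
    """
    features = []
    for axis in "xycd":
        # Per direction: occupancy ratios of all strokes, sorted descending,
        # keeping only the top_n entries.
        ranked = sorted((r[axis] for r in occupancy), reverse=True)
        features.extend(_pad(ranked, top_n))
    features.extend(_pad(sorted(saturation, reverse=True), top_n))
    return features
```

The result is order- and direction-independent by construction, since each block is sorted before packing.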
In some embodiments, during real-time recognition, the input image of a recognition unit is provided to the handwriting recognition model that has been trained with both the spatially derived features and the temporally derived features. The input image is processed through the successive layers of the handwriting recognition model shown in Figure 26. When the processing of the input image reaches the layer that takes the stroke distribution profile as input (for example, the last convolutional layer or the hidden layer), the stroke distribution profile of the recognition unit is injected into that layer. Processing of the input image and the stroke distribution profile continues until an output classification (for example, one or more candidate characters) is provided at output layer 2608. In some embodiments, the stroke distribution profiles of all recognition units are computed, and the stroke distribution profile of each recognition unit is provided to the handwriting recognition model as input together with the input image of the recognition unit. In some embodiments, the input image of a recognition unit initially passes through the handwriting recognition model without the benefit of the temporally trained features. When two or more similar-looking candidate characters with close recognition confidence values are identified, the stroke distribution profile of the recognition unit is then injected into the handwriting recognition model at the layer that has been trained with the temporally derived features (for example, the last convolutional layer or the hidden layer). As the input image and the stroke distribution profile of the recognition unit pass through the final layers of the handwriting recognition model, the two or more similar-looking candidate characters can be better distinguished by virtue of the differences in their stroke distribution profiles. Thus, temporally derived information about how a recognition unit was formed from its individual handwritten strokes improves recognition accuracy without affecting the stroke-order and stroke-direction independence of the handwriting recognition system.
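The two-stage flow described above — an image-only pass first, with the stroke distribution profile injected only when the top candidates are too close to call apart — can be sketched as follows. The confidence-margin threshold and the model interfaces are hypothetical.

```python
def recognize(image, stroke_profile, model_image_only, model_with_profile,
              confidence_margin=0.05):
    """Two-stage recognition.

    model_image_only(image)                  -> [(char, confidence), ...]
    model_with_profile(image, profile)       -> [(char, confidence), ...]
    Fall back to the profile-augmented pass only when the two best
    image-only candidates are within `confidence_margin` of each other.
    """
    candidates = sorted(model_image_only(image),
                        key=lambda c: c[1], reverse=True)
    if (len(candidates) > 1
            and candidates[0][1] - candidates[1][1] < confidence_margin):
        candidates = sorted(model_with_profile(image, stroke_profile),
                            key=lambda c: c[1], reverse=True)
    return candidates
```

This keeps the cheap spatial pass on the common path and reserves the temporally derived disambiguation for confusable cases.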
The foregoing description has, for purposes of explanation, been given with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, thereby enabling others skilled in the art to make full use of the invention and of various embodiments with various modifications suited to the particular use contemplated.
Claims (68)
Applications Claiming Priority (13)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361832921P | 2013-06-09 | 2013-06-09 | |
US201361832908P | 2013-06-09 | 2013-06-09 | |
US201361832942P | 2013-06-09 | 2013-06-09 | |
US201361832934P | 2013-06-09 | 2013-06-09 | |
US61/832,908 | 2013-06-09 | ||
US61/832,942 | 2013-06-09 | ||
US61/832,934 | 2013-06-09 | ||
US61/832,921 | 2013-06-09 | ||
US14/290,945 US9465985B2 (en) | 2013-06-09 | 2014-05-29 | Managing real-time handwriting recognition |
US14/290,945 | 2014-05-29 | ||
US14/290,935 | 2014-05-29 | ||
US14/290,935 US9898187B2 (en) | 2013-06-09 | 2014-05-29 | Managing real-time handwriting recognition |
CN201480030897.0A CN105247540B (en) | 2013-06-09 | 2014-05-30 | Manage real-time handwriting recognition |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201480030897.0A Division CN105247540B (en) | 2013-06-09 | 2014-05-30 | Manage real-time handwriting recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109614846A true CN109614846A (en) | 2019-04-12 |
Family
ID=52022661
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811217821.5A Pending CN109614846A (en) | 2013-06-09 | 2014-05-30 | Manage real-time handwriting recognition |
CN201480030897.0A Active CN105247540B (en) | 2013-06-09 | 2014-05-30 | Manage real-time handwriting recognition |
CN201811217822.XA Active CN109614847B (en) | 2013-06-09 | 2014-05-30 | Manage real-time handwriting recognition |
CN201811217768.9A Active CN109614845B (en) | 2013-06-09 | 2014-05-30 | Managing real-time handwriting recognition |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201480030897.0A Active CN105247540B (en) | 2013-06-09 | 2014-05-30 | Manage real-time handwriting recognition |
CN201811217822.XA Active CN109614847B (en) | 2013-06-09 | 2014-05-30 | Manage real-time handwriting recognition |
CN201811217768.9A Active CN109614845B (en) | 2013-06-09 | 2014-05-30 | Managing real-time handwriting recognition |
Country Status (5)
Country | Link |
---|---|
JP (8) | JP6154550B2 (en) |
KR (7) | KR102121487B1 (en) |
CN (4) | CN109614846A (en) |
HK (1) | HK1220276A1 (en) |
WO (1) | WO2014200736A1 (en) |
Families Citing this family (69)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8074172B2 (en) | 2007-01-05 | 2011-12-06 | Apple Inc. | Method, system, and graphical user interface for providing word recommendations |
US9465985B2 (en) | 2013-06-09 | 2016-10-11 | Apple Inc. | Managing real-time handwriting recognition |
US10114544B2 (en) * | 2015-06-06 | 2018-10-30 | Apple Inc. | Systems and methods for generating and providing intelligent time to leave reminders |
US10013603B2 (en) * | 2016-01-20 | 2018-07-03 | Myscript | System and method for recognizing multiple object structure |
KR102482850B1 (en) * | 2016-02-15 | 2022-12-29 | 삼성전자 주식회사 | Electronic device and method for providing handwriting calibration function thereof |
CN107220655A (en) * | 2016-03-22 | 2017-09-29 | 华南理工大学 | A kind of hand-written, printed text sorting technique based on deep learning |
US20170308289A1 (en) * | 2016-04-20 | 2017-10-26 | Google Inc. | Iconographic symbol search within a graphical keyboard |
JP6728993B2 (en) * | 2016-05-31 | 2020-07-22 | 富士ゼロックス株式会社 | Writing system, information processing device, program |
JP6611346B2 (en) * | 2016-06-01 | 2019-11-27 | 日本電信電話株式会社 | Character string recognition apparatus, method, and program |
DK179329B1 (en) * | 2016-06-12 | 2018-05-07 | Apple Inc | Handwriting keyboard for monitors |
CN106126092A (en) * | 2016-06-20 | 2016-11-16 | 联想(北京)有限公司 | A kind of information processing method and electronic equipment |
TWI633463B (en) * | 2016-06-20 | 2018-08-21 | 鴻海精密工業股份有限公司 | Text input method |
US10325018B2 (en) * | 2016-10-17 | 2019-06-18 | Google Llc | Techniques for scheduling language models and character recognition models for handwriting inputs |
CN106527875B (en) * | 2016-10-25 | 2019-11-29 | 北京小米移动软件有限公司 | Electronic recording method and device |
WO2018163005A1 (en) | 2017-03-10 | 2018-09-13 | 株式会社半導体エネルギー研究所 | Touch panel system, electronic device, and semiconductor device |
WO2018211350A1 (en) | 2017-05-19 | 2018-11-22 | Semiconductor Energy Laboratory Co., Ltd. | Machine learning method, machine learning system, and display system |
US11188158B2 (en) | 2017-06-02 | 2021-11-30 | Samsung Electronics Co., Ltd. | System and method of determining input characters based on swipe input |
KR102474245B1 (en) | 2017-06-02 | 2022-12-05 | 삼성전자주식회사 | System and method for determinig input character based on swipe input |
US10481791B2 (en) * | 2017-06-07 | 2019-11-19 | Microsoft Technology Licensing, Llc | Magnified input panels |
US20190155895A1 (en) * | 2017-11-20 | 2019-05-23 | Google Llc | Electronic text pen systems and methods |
CN107861684A (en) * | 2017-11-23 | 2018-03-30 | 广州视睿电子科技有限公司 | Writing recognition method and device, storage medium and computer equipment |
KR102008845B1 (en) * | 2017-11-30 | 2019-10-21 | 굿모니터링 주식회사 | Automatic classification method of unstructured data |
CN109992124B (en) * | 2018-01-02 | 2024-05-31 | 北京搜狗科技发展有限公司 | Input method, apparatus and machine readable medium |
KR102053885B1 (en) * | 2018-03-07 | 2019-12-09 | 주식회사 엘렉시 | System, Method and Application for Analysis of Handwriting |
CN108710882A (en) * | 2018-05-11 | 2018-10-26 | 武汉科技大学 | A kind of screen rendering text recognition method based on convolutional neural networks |
JP7298290B2 (en) * | 2018-06-19 | 2023-06-27 | 株式会社リコー | HANDWRITING INPUT DISPLAY DEVICE, HANDWRITING INPUT DISPLAY METHOD AND PROGRAM |
KR101989960B1 (en) | 2018-06-21 | 2019-06-17 | 가천대학교 산학협력단 | Real-time handwriting recognition method using plurality of machine learning models, computer-readable medium having a program recorded therein for executing the same and real-time handwriting recognition system |
US11270486B2 (en) * | 2018-07-02 | 2022-03-08 | Apple Inc. | Electronic drawing with handwriting recognition |
CN109446780B (en) * | 2018-11-01 | 2020-11-27 | 北京知道创宇信息技术股份有限公司 | Identity authentication method, device and storage medium thereof |
CN109471587B (en) * | 2018-11-13 | 2020-05-12 | 掌阅科技股份有限公司 | Java virtual machine-based handwritten content display method and electronic equipment |
CN109858323A (en) * | 2018-12-07 | 2019-06-07 | 广州光大教育软件科技股份有限公司 | A kind of character hand-written recognition method and system |
CN110009027B (en) * | 2019-03-28 | 2022-07-29 | 腾讯科技(深圳)有限公司 | Image comparison method and device, storage medium and electronic device |
CN110135530B (en) * | 2019-05-16 | 2021-08-13 | 京东方科技集团股份有限公司 | Method and system for converting Chinese character font in image, computer device and medium |
US11194467B2 (en) | 2019-06-01 | 2021-12-07 | Apple Inc. | Keyboard management user interfaces |
CN110362247A (en) * | 2019-07-18 | 2019-10-22 | 江苏中威科技软件系统有限公司 | A method for signing electronic documents in a magnified view |
CN112257820B (en) * | 2019-07-22 | 2024-09-03 | 珠海金山办公软件有限公司 | Information correction method and device |
KR20210017090A (en) * | 2019-08-06 | 2021-02-17 | 삼성전자주식회사 | Method and electronic device for converting handwriting input to text |
CN110942089B (en) * | 2019-11-08 | 2023-10-10 | 东北大学 | Multi-level decision-based keystroke recognition method |
EP4130966A1 (en) | 2019-11-29 | 2023-02-08 | MyScript | Gesture stroke recognition in touch-based user interface input |
US20200251217A1 (en) * | 2019-12-12 | 2020-08-06 | Renee CASSUTO | Diagnosis Method Using Image Based Machine Learning Analysis of Handwriting |
CN111078073B (en) * | 2019-12-17 | 2021-03-23 | 科大讯飞股份有限公司 | Handwriting amplification method and related device |
EP3839706B1 (en) * | 2019-12-20 | 2023-07-05 | The Swatch Group Research and Development Ltd | Method and device for determining the position of an object on a given surface |
EP3859602B1 (en) * | 2020-01-28 | 2023-08-09 | MyScript | Math detection in handwriting |
CN111355715B (en) * | 2020-02-21 | 2021-06-04 | 腾讯科技(深圳)有限公司 | Processing method, system, device, medium and electronic equipment of event to be resolved |
JP7540190B2 (en) * | 2020-05-08 | 2024-08-27 | ブラザー工業株式会社 | Editing Program |
CN111736751B (en) * | 2020-08-26 | 2021-03-26 | 深圳市千分一智能技术有限公司 | Stroke redrawing method, device and readable storage medium |
US11627799B2 (en) * | 2020-12-04 | 2023-04-18 | Keith McRobert | Slidable work surface |
US11531454B2 (en) | 2020-12-10 | 2022-12-20 | Microsoft Technology Licensing, Llc | Selecting content in ink documents using a hierarchical data structure |
US11587346B2 (en) | 2020-12-10 | 2023-02-21 | Microsoft Technology Licensing, Llc | Detecting ink gestures based on spatial and image data processing |
KR20220088166A (en) | 2020-12-18 | 2022-06-27 | 삼성전자주식회사 | Method and apparatus for recognizing handwriting inputs in a multiple user environment |
EP4057182A1 (en) | 2021-03-09 | 2022-09-14 | Société BIC | Handwriting feedback |
JP2022148901A (en) * | 2021-03-24 | 2022-10-06 | カシオ計算機株式会社 | Character recognition apparatus, character recognition method, and program |
KR20220135914A (en) * | 2021-03-31 | 2022-10-07 | 삼성전자주식회사 | Electronic device for processing handwriting input based on machine learning, operating method thereof and storage medium |
CN113190161B (en) * | 2021-04-25 | 2025-01-10 | 无锡乐骐科技股份有限公司 | An electronic writing practice method based on convolutional neural network |
EP4258094A4 (en) * | 2021-04-28 | 2024-07-10 | Samsung Electronics Co., Ltd. | ELECTRONIC DEVICE FOR PROCESSING HANDWRITTEN INPUTS AND OPERATING METHODS THEREFOR |
KR20220147832A (en) * | 2021-04-28 | 2022-11-04 | 삼성전자주식회사 | Electronic device for processing handwriting input and method of operating the same |
KR102366052B1 (en) * | 2021-05-28 | 2022-02-23 | (유)벨류이 | Writing system and method using delay time reduction processing, and low complexity distance measurement algorithm based on chirp spread spectrum for the same |
CN113673415B (en) * | 2021-08-18 | 2022-03-04 | 山东建筑大学 | Handwritten Chinese character identity authentication method and system |
EP4145264A1 (en) * | 2021-09-07 | 2023-03-08 | Ricoh Company, Ltd. | Display apparatus, carrier means, and display method |
CN113918030B (en) * | 2021-09-30 | 2024-10-15 | 北京搜狗科技发展有限公司 | Handwriting input method and device for handwriting input |
JP2023058255A (en) | 2021-10-13 | 2023-04-25 | 株式会社デンソー | Vehicle electronic key system and vehicle authentication device |
CN118946873A (en) | 2022-04-05 | 2024-11-12 | 三星电子株式会社 | Handwriting synchronization method and electronic device |
KR102468713B1 (en) * | 2022-07-07 | 2022-11-21 | 주식회사 에이치투케이 | AI- based Device and Method for Stroke Order Recognition of Korean Handwriting of Student |
CN119563157A (en) | 2022-07-14 | 2025-03-04 | 三星电子株式会社 | Electronic device and method for recognizing sentences represented by strokes |
WO2024014655A1 (en) * | 2022-07-14 | 2024-01-18 | 삼성전자 주식회사 | Electronic device and method for identifying sentence expressed by strokes |
CN115291791B (en) * | 2022-08-17 | 2024-08-06 | 维沃移动通信有限公司 | Text recognition method, device, electronic equipment and storage medium |
KR20240065997A (en) * | 2022-11-07 | 2024-05-14 | 삼성전자주식회사 | Method and apparatus for recognizing handwriting input |
CN116646911B (en) * | 2023-07-27 | 2023-10-24 | 成都华普电器有限公司 | Current sharing distribution method and system applied to digital power supply parallel mode |
CN117037186B (en) * | 2023-10-09 | 2024-01-30 | 山东维克特信息技术有限公司 | Patient data management system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101123044A (en) * | 2007-09-13 | 2008-02-13 | 无敌科技(西安)有限公司 | Chinese writing and learning method |
WO2009074047A1 (en) * | 2007-12-13 | 2009-06-18 | Shenzhen Huawei Communication Technologies Co. , Ltd. | Method, system, device and terminal for correcting touch screen error |
US20090295737A1 (en) * | 2008-05-30 | 2009-12-03 | Deborah Eileen Goldsmith | Identification of candidate characters for text input |
US8094942B1 (en) * | 2011-06-13 | 2012-01-10 | Google Inc. | Character recognition for overlapping textual user input |
EP2535844A2 (en) * | 2011-06-13 | 2012-12-19 | Google Inc. | Character recognition for overlapping textual user input |
Family Cites Families (51)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0614372B2 (en) * | 1984-01-23 | 1994-02-23 | 日本電信電話株式会社 | Character reading method |
JPS61272890A (en) * | 1985-05-29 | 1986-12-03 | Canon Inc | Device for recognizing handwritten character |
DE69315990T2 (en) * | 1993-07-01 | 1998-07-02 | Ibm | Pattern recognition by creating and using zone-wise features and anti-features |
JP3353954B2 (en) * | 1993-08-13 | 2002-12-09 | ソニー株式会社 | Handwriting input display method and handwriting input display device |
JPH07160827A (en) * | 1993-12-09 | 1995-06-23 | Matsushita Electric Ind Co Ltd | Handwritten stroke editing device and method therefor |
JPH07200723A (en) * | 1993-12-29 | 1995-08-04 | Canon Inc | Method and device for recognizing character |
JPH0855182A (en) * | 1994-06-10 | 1996-02-27 | Nippon Steel Corp | Handwritten character input device |
DE69523567T2 (en) * | 1994-11-14 | 2002-06-27 | Motorola, Inc. | METHOD FOR DIVIDING HANDWRITING INPUTS |
US5737443A (en) * | 1994-11-14 | 1998-04-07 | Motorola, Inc. | Method of joining handwritten input |
JP3333362B2 (en) * | 1995-04-11 | 2002-10-15 | 株式会社日立製作所 | Character input device |
TW338815B (en) * | 1995-06-05 | 1998-08-21 | Motorola Inc | Method and apparatus for character recognition of handwritten input |
JP4115568B2 (en) * | 1996-12-18 | 2008-07-09 | シャープ株式会社 | Text input device |
JPH10307675A (en) * | 1997-05-01 | 1998-11-17 | Hitachi Ltd | Handwritten character recognition method and apparatus |
US6970599B2 (en) * | 2002-07-25 | 2005-11-29 | America Online, Inc. | Chinese character handwriting recognition system |
JP4663903B2 (en) * | 2000-04-20 | 2011-04-06 | パナソニック株式会社 | Handwritten character recognition device, handwritten character recognition program, and computer-readable recording medium recording the handwritten character recognition program |
US7336827B2 (en) * | 2000-11-08 | 2008-02-26 | New York University | System, process and software arrangement for recognizing handwritten characters |
US7286141B2 (en) * | 2001-08-31 | 2007-10-23 | Fuji Xerox Co., Ltd. | Systems and methods for generating and controlling temporary digital ink |
JP2003162687A (en) * | 2001-11-28 | 2003-06-06 | Toshiba Corp | Handwritten character-inputting apparatus and handwritten character-recognizing program |
JP4212270B2 (en) * | 2001-12-07 | 2009-01-21 | シャープ株式会社 | Character input device, character input method, and program for inputting characters |
US6986106B2 (en) * | 2002-05-13 | 2006-01-10 | Microsoft Corporation | Correction widget |
JP2004213269A (en) * | 2002-12-27 | 2004-07-29 | Toshiba Corp | Character input device |
US8479112B2 (en) * | 2003-05-13 | 2013-07-02 | Microsoft Corporation | Multiple input language selection |
JP2005341387A (en) * | 2004-05-28 | 2005-12-08 | Nokia Corp | Real-time communication system, and transmitting / receiving apparatus and method used for real-time communication |
JP2006323502A (en) * | 2005-05-17 | 2006-11-30 | Canon Inc | Information processor, and its control method and program |
US7496547B2 (en) * | 2005-06-02 | 2009-02-24 | Microsoft Corporation | Handwriting recognition using a comparative neural network |
US7720316B2 (en) * | 2006-09-05 | 2010-05-18 | Microsoft Corporation | Constraint-based correction of handwriting recognition errors |
KR100859010B1 (en) * | 2006-11-01 | 2008-09-18 | 노키아 코포레이션 | Apparatus and method for handwriting recognition |
CN101311887A (en) * | 2007-05-21 | 2008-11-26 | 刘恩新 | Computer hand-written input system and input method and editing method |
JP2009110092A (en) * | 2007-10-26 | 2009-05-21 | Alps Electric Co Ltd | Input processor |
US8116569B2 (en) * | 2007-12-21 | 2012-02-14 | Microsoft Corporation | Inline handwriting recognition and correction |
CN101676838B (en) * | 2008-09-16 | 2012-05-23 | 夏普株式会社 | Input device |
US8584031B2 (en) * | 2008-11-19 | 2013-11-12 | Apple Inc. | Portable touch screen device, method, and graphical user interface for using emoji characters |
US20100166314A1 (en) * | 2008-12-30 | 2010-07-01 | Microsoft Corporation | Segment Sequence-Based Handwritten Expression Recognition |
US8391613B2 (en) | 2009-06-30 | 2013-03-05 | Oracle America, Inc. | Statistical online character recognition |
JP2011065623A (en) * | 2009-08-21 | 2011-03-31 | Sharp Corp | Information retrieving apparatus, and control method of the same |
CN101893987A (en) * | 2010-06-01 | 2010-11-24 | 华南理工大学 | A kind of handwriting input method of electronic equipment |
JP5581448B2 (en) | 2010-08-24 | 2014-08-27 | ノキア コーポレイション | Method and apparatus for grouping overlapping handwritten character strokes into one or more groups |
JP2012108871A (en) | 2010-10-26 | 2012-06-07 | Nec Corp | Information processing device and handwriting input processing method therefor |
KR101548835B1 (en) * | 2010-12-02 | 2015-09-11 | 노키아 코포레이션 | Method, apparatus, and computer program product for overlapped handwriting |
JP5550598B2 (en) | 2011-03-31 | 2014-07-16 | パナソニック株式会社 | Handwritten character input device |
WO2012140935A1 (en) | 2011-04-11 | 2012-10-18 | Necカシオモバイルコミュニケーションズ株式会社 | Information input device |
CN102135838A (en) * | 2011-05-05 | 2011-07-27 | 汉王科技股份有限公司 | Method and system for partitioned input of handwritten character string |
US8977059B2 (en) | 2011-06-03 | 2015-03-10 | Apple Inc. | Integrating feature extraction via local sequential embedding for automatic handwriting recognition |
US20130002553A1 (en) * | 2011-06-29 | 2013-01-03 | Nokia Corporation | Character entry apparatus and associated methods |
JP5330478B2 (en) * | 2011-10-14 | 2013-10-30 | 株式会社エヌ・ティ・ティ・ドコモ | Input support device, program, and pictogram input support method |
JP2013089131A (en) * | 2011-10-20 | 2013-05-13 | Kyocera Corp | Device, method and program |
CN102566933A (en) * | 2011-12-31 | 2012-07-11 | 广东步步高电子工业有限公司 | A method for effectively distinguishing command gestures and characters in full-screen handwriting |
JP6102374B2 (en) * | 2013-03-15 | 2017-03-29 | オムロン株式会社 | Reading character correction program and character reading device |
US20170045981A1 (en) * | 2015-08-10 | 2017-02-16 | Apple Inc. | Devices and Methods for Processing Touch Inputs Based on Their Intensities |
US11204787B2 (en) * | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
GB201704729D0 (en) | 2017-03-24 | 2017-05-10 | Lucite Int Uk Ltd | Method of producing methyl methacrylate or methacrylic acid |
2014
- 2014-05-30 WO PCT/US2014/040417 patent/WO2014200736A1/en active Application Filing
- 2014-05-30 KR KR1020197021958A patent/KR102121487B1/en active Active
- 2014-05-30 KR KR1020157033627A patent/KR101892723B1/en active Active
- 2014-05-30 KR KR1020187024261A patent/KR102005878B1/en active Active
- 2014-05-30 KR KR1020257005196A patent/KR20250029989A/en active Pending
- 2014-05-30 JP JP2016518366A patent/JP6154550B2/en active Active
- 2014-05-30 CN CN201811217821.5A patent/CN109614846A/en active Pending
- 2014-05-30 KR KR1020207016098A patent/KR102221079B1/en active Active
- 2014-05-30 KR KR1020217005264A patent/KR102347064B1/en active Active
- 2014-05-30 KR KR1020217043310A patent/KR102771373B1/en active Active
- 2014-05-30 CN CN201480030897.0A patent/CN105247540B/en active Active
- 2014-05-30 CN CN201811217822.XA patent/CN109614847B/en active Active
- 2014-05-30 CN CN201811217768.9A patent/CN109614845B/en active Active
2016
- 2016-07-12 HK HK16108185.0A patent/HK1220276A1/en not_active IP Right Cessation
2017
- 2017-06-01 JP JP2017109294A patent/JP6559184B2/en active Active
2019
- 2019-04-15 JP JP2019077312A patent/JP6802876B2/en active Active
2020
- 2020-11-27 JP JP2020197242A patent/JP6903808B2/en active Active
2021
- 2021-06-23 JP JP2021104255A patent/JP7011747B2/en active Active
2022
- 2022-01-14 JP JP2022004546A patent/JP7078808B2/en active Active
- 2022-05-19 JP JP2022082332A patent/JP7361156B2/en active Active
2023
- 2023-10-02 JP JP2023171414A patent/JP2023182718A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101123044A (en) * | 2007-09-13 | 2008-02-13 | 无敌科技(西安)有限公司 | Chinese writing and learning method |
WO2009074047A1 (en) * | 2007-12-13 | 2009-06-18 | Shenzhen Huawei Communication Technologies Co. , Ltd. | Method, system, device and terminal for correcting touch screen error |
US20090295737A1 (en) * | 2008-05-30 | 2009-12-03 | Deborah Eileen Goldsmith | Identification of candidate characters for text input |
US8094942B1 (en) * | 2011-06-13 | 2012-01-10 | Google Inc. | Character recognition for overlapping textual user input |
EP2535844A2 (en) * | 2011-06-13 | 2012-12-19 | Google Inc. | Character recognition for overlapping textual user input |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11816326B2 (en) | Managing real-time handwriting recognition | |
CN109614846A (en) | Manage real-time handwriting recognition | |
TWI570632B (en) | Multi-handwriting handwriting recognition using a universal recognizer | |
TWI653545B (en) | Method, system and non-transitory computer-readable media for real-time handwriting recognition | |
US20140363082A1 (en) | Integrating stroke-distribution information into spatial feature extraction for automatic handwriting recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||