CN113705156B - Character processing method and device - Google Patents
- Publication number
- CN113705156B (application CN202111007595.XA)
- Authority
- CN
- China
- Prior art keywords
- character
- displayed
- information
- string
- character string
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/103—Formatting, i.e. changing of presentation of documents
- G06F40/109—Font handling; Temporal or kinetic typography
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/30—Creation or generation of source code
- G06F8/38—Creation or generation of source code for implementing user interfaces
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
The application provides a character processing method and a character processing device. The character processing method includes: receiving at least one character to be displayed; determining character attribute information of each character to be displayed; splicing the at least one character to be displayed according to the character attribute information of each character to be displayed to generate a character string to be displayed, and acquiring character string typesetting information of the character string to be displayed; and rendering, in a target canvas layer according to the character string to be displayed and the character string typesetting information, a rendering result corresponding to the character string to be displayed. The application provides a complete character rendering and edit-response flow, realizes canvas-based character typesetting and character editing, supports complex character styles, and supports editing while the text is rotated or scaled.
Description
Technical Field
The application relates to the technical field of Internet, in particular to a character processing method. The application also relates to a character processing device, a computing device and a computer readable storage medium.
Background
In canvas-based web games or web-page picture editors, there is often a scene in which characters are input. The characters may have rich styles, such as colors, outlines, shadows, text textures and borders, and may also be rotated by an angle or scaled in size. When characters are edited and modified, the editing program needs to display a text cursor, provide text box-selection capability, and support text-input shortcut keys. The built-in text input mode provided by the browser is not based on the canvas, cannot be used in this scene, and cannot restore rich character styles; if an open-source rendering engine is introduced instead, its text-input capability cannot be used independently and must be customized on top of the rendering engine, which consumes a large amount of development work.
Disclosure of Invention
In view of this, the embodiment of the application provides a character processing method. The application also relates to a character processing device, a computing device and a computer readable storage medium, which are used for solving the problem that the character input mode provided by a browser in the prior art cannot be applied in a canvas-based character input scene.
According to a first aspect of an embodiment of the present application, there is provided a character processing method, including:
receiving at least one character to be displayed;
determining character attribute information of each character to be displayed;
splicing the at least one character to be displayed according to the character attribute information of each character to be displayed to generate a character string to be displayed, and acquiring character string typesetting information of the character string to be displayed;
and rendering in a target canvas layer according to the character string to be displayed and the character string typesetting information to generate a rendering result corresponding to the character string to be displayed.
According to a second aspect of an embodiment of the present application, there is provided a character processing apparatus including:
a receiving module configured to receive at least one character to be displayed;
a determining module configured to determine character attribute information of each character to be displayed;
a splicing module configured to splice the at least one character to be displayed according to the character attribute information of each character to be displayed to generate a character string to be displayed, and acquire character string typesetting information of the character string to be displayed;
and a rendering module configured to render in the target canvas layer according to the character string to be displayed and the character string typesetting information to generate a rendering result corresponding to the character string to be displayed.
According to a third aspect of embodiments of the present application, there is provided a computing device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the character processing method when executing the computer instructions.
According to a fourth aspect of embodiments of the present application, there is provided a computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the character processing method.
The character processing method includes: receiving at least one character to be displayed; determining character attribute information of each character to be displayed; splicing the at least one character to be displayed according to the character attribute information of each character to be displayed to generate a character string to be displayed; obtaining character string typesetting information of the character string to be displayed; and rendering in a target canvas layer according to the character string to be displayed and the character string typesetting information to generate a rendering result corresponding to the character string to be displayed. The application provides a complete character processing, rendering and edit-response flow, realizes canvas-based character typesetting and character editing, supports complex character styles, and supports editing while the characters are rotated or scaled.
Drawings
FIG. 1 is a flow chart of a character processing method according to an embodiment of the present application;
FIG. 2 is a diagram of a rendering result coordinate system according to an embodiment of the present application;
FIG. 3 is a process flow diagram of a character processing method for a web picture editor according to one embodiment of the present application;
FIG. 4 is a schematic diagram of a character processing apparatus according to an embodiment of the present application;
FIG. 5 is a block diagram of a computing device according to one embodiment of the application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The present application may be embodied in many other forms than those herein described, and those skilled in the art will readily appreciate that the present application may be similarly embodied without departing from the spirit or essential characteristics thereof, and therefore the present application is not limited to the specific embodiments disclosed below.
The terminology used in the one or more embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the application. As used in one or more embodiments of the application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments of the application to describe various information, these information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first may also be referred to as a second, and similarly, a second may also be referred to as a first, without departing from the scope of one or more embodiments of the application. The term "if" as used herein may be interpreted as "when", "upon" or "in response to a determination", depending on the context.
First, terms related to one or more embodiments of the present application will be explained.
Canvas is an HTML element that can use scripts (typically JavaScript) to draw graphics.
In canvas-based web games and web-page picture editors, there are often scenes in which characters are input, and the characters may have various rich styles. The input and textarea components in the browser provide input modes, and open-source drawing engines implemented on top of the canvas also exist. However, the browser's input components are not based on the canvas and cannot restore the rich text styles, and it is also difficult to use the text-input capability of an open-source rendering engine independently within an application.
Based on this, the application provides a character processing method, and further relates to a character processing device, a computing device and a computer readable storage medium. The method supports a text cursor, text box selection, shortcut-key editing and rich character styles, allows mouse box selection and input while the screen is rotated or scaled, and is easy to embed into a canvas-based web game or web-page picture editor. These are described in detail in the following embodiments.
Fig. 1 shows a flowchart of a character processing method according to an embodiment of the present application, which specifically includes the following steps:
Step 102, at least one character to be displayed is received.
The character to be displayed is character information to be displayed in a canvas layer (canvas); it may be Chinese, English, Japanese, Korean, numerals, and the like.
The character to be displayed may be a character related to a service. For example, in a web game, the nickname "Zhang Sanabc" of a player needs to be displayed in the upper right corner of the screen; "Zhang Sanabc" is then the character to be displayed. The client receives the characters to be displayed sent by the server, and the characters are displayed in the client after text typesetting and text rendering.
In a specific embodiment provided by the application, taking a web game as an example, game data sent by a game server is received, and the game data comprises a character 'Zhang Sanabc' to be displayed.
And 104, determining character attribute information of each character to be displayed.
Each character to be displayed has corresponding character attribute information, such as character position information, character color, font size, font style, maximum width, stroke, shadow, horizontal alignment, vertical alignment, line spacing, and the like. The character attribute information of each character can be used to determine the style of each character during rendering.
In practical applications, after the characters to be displayed are acquired, the display area information of the characters needs to be determined, for example, in which area the characters need to be displayed and how large that display area is. This information is derived from the character attribute information of each character to be displayed. Based on this, determining the character attribute information of each character to be displayed includes:
Measuring character width information of each character to be displayed;
And acquiring character drawing information of each character to be displayed.
After the characters to be displayed are received, the continuous characters need to be segmented into individual characters. For example, if the characters to be displayed are "Zhang Sanabc", the individual characters after segmentation are "Zhang", "San", "a", "b" and "c". The width of each character is then measured by the measureText method of the web canvas to obtain the character width information of each character to be displayed, which serves as the character drawing information of each character to be displayed. The measureText function measures the width of each character before the text is output on the canvas.
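The segmentation and measurement just described can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names and the stub width table are assumed for the example, and in a browser the measurer would be `ctx.measureText(ch).width` on a 2D canvas context.

```javascript
// Split a string into individual characters. Array.from iterates by
// Unicode code point, so surrogate pairs are not split in half.
function splitCharacters(text) {
  return Array.from(text);
}

// Measure each character with the supplied measurer function.
// In a browser this measurer would be ch => ctx.measureText(ch).width.
function measureCharacters(text, measure) {
  return splitCharacters(text).map(ch => ({ ch, width: measure(ch) }));
}

// Stub measurer for illustration only: CJK characters are treated as
// full-width (16px), everything else as half-width (8px).
const stubMeasure = ch => (/[\u4e00-\u9fff]/.test(ch) ? 16 : 8);

const widths = measureCharacters("张三abc", stubMeasure);
// e.g. [{ch:'张',width:16},{ch:'三',width:16},{ch:'a',width:8}, ...]
```

The per-character width list produced here is the "character drawing information" that the later typesetting steps consume.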
And 106, splicing at least one character to be displayed according to the character attribute information of each character to be displayed to generate a character string to be displayed, and acquiring character string typesetting information of the character string to be displayed.
After the character attribute information of each character to be displayed is obtained, the characters are spliced according to the character attribute information to generate a character string to be displayed. The character string to be displayed specifically refers to the text information displayed in the canvas layer, and the character string typesetting information includes the line feed information of the character string when it is output to the canvas layer, the position information, the width information of each character, the bounding-box information around each character, the information of the text box, and the like.
Generating the character string to be displayed and acquiring its character string typesetting information makes it convenient to provide accurate typesetting information when the character string is subsequently rendered.
Specifically, obtaining the character string typesetting information of the character string to be displayed includes S1062 to S1068:
S1062, acquiring character spacing information of adjacent characters to be displayed, character string attribute information of the character strings to be displayed and character string coordinate information of the character strings to be displayed.
In practical application, each character to be displayed is traversed to acquire the character spacing information of two adjacent characters to be displayed, along with the character string attribute information and the character string coordinate information of the character string to be displayed. The character spacing information is used to determine the spacing between any two adjacent characters to be displayed; the character string attribute information includes the width information and line-count information of the text box corresponding to the character string, and whether the character string to be displayed includes the line feed character "\n"; and the character string coordinate information specifically refers to the position information of the character string to be displayed in the canvas layer.
S1064, determining character string line feed information of the character string to be displayed according to the character width information of each character to be displayed and the character spacing information between adjacent characters to be displayed.
After each character to be displayed is traversed to obtain the character spacing information between adjacent characters, the character width information of each character is combined to determine the character string line feed information of the character string to be displayed. The line feed information specifically refers to how many lines of display content are needed to display the character string.
Specifically, determining the character string line feed information of the character string to be displayed according to the character width information of each character to be displayed and the character spacing information between adjacent characters to be displayed includes:
splicing each character to be displayed in sequence to obtain a character string to be displayed;
calculating the length of a spliced character string of the character string to be displayed according to the character width information of the character string to be displayed and the character spacing information between adjacent characters to be displayed;
and executing line feed operation on the character string to be displayed and generating character string line feed information under the condition that the length of the spliced character string is larger than a preset length threshold value.
In practical application, after the character width information of each character to be displayed and the character spacing information between adjacent characters are obtained, the characters can be spliced accordingly to obtain the character string to be displayed. During splicing, if the sum of the total width of the characters already spliced into a line and the spacing between them would become greater than a preset length threshold, a line feed operation needs to be performed on the character string. The preset length threshold may be set directly as a length parameter, or determined from the character string attribute information in S1062. For example, if the character string attribute information specifies that the width of the text box corresponding to the character string is t, the preset length threshold is set to t; when the length of a line of the character string exceeds t, a line feed operation is performed on the character string, and the character string line feed information is generated and recorded.
From the character string attribute information in S1062 it can also be determined whether the character string to be displayed includes a line feed character; a line feed character also triggers a line feed operation, so the method further includes:
and executing line feed operation on the character string to be displayed and generating character string line feed information under the condition that the character to be displayed comprises line feed characters.
That is, if the characters to be displayed include a line feed character, a line feed operation is likewise performed on the character string at the position of that character, and the character string line feed information is generated and recorded.
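The two line feed triggers described above (width overflow and an explicit "\n") can be sketched together. This is an illustrative sketch with assumed names, not the patent's code; it assumes the per-character advance is the character width plus an inter-character gap, and that a single character wider than the threshold stays on its own line.

```javascript
// Wrap a character array into lines. chars and widths are in reading
// order (as produced by the measurement step); maxWidth is the preset
// length threshold t; spacing is the gap between adjacent characters.
function wrapString(chars, widths, maxWidth, spacing) {
  const lines = [[]];
  let lineWidth = 0;
  chars.forEach((ch, i) => {
    if (ch === "\n") {                 // explicit line feed character
      lines.push([]);
      lineWidth = 0;
      return;
    }
    const line = lines[lines.length - 1];
    const advance = widths[i] + (line.length ? spacing : 0);
    if (lineWidth + advance > maxWidth && line.length) {
      lines.push([ch]);                // width overflow: start a new line
      lineWidth = widths[i];
    } else {
      line.push(ch);
      lineWidth += advance;
    }
  });
  return lines.map(l => l.join(""));
}
```

The returned array of lines is the "character string line feed information" in the sense used above: it records how many lines are needed and where each line breaks.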
The character string attribute information in S1062 further includes the number of lines of the text box corresponding to the character string to be displayed, so the method further includes:
counting the number of character strings of the character string to be displayed;
and hiding, in the case that the line count of the character string is larger than a preset line-count threshold, the characters to be displayed that fall after the tail line of the character string to be displayed.
The line count of the character string to be displayed can be determined from its line feed information; the line count specifically means how many lines the character string occupies in total. When the line count is larger than the preset line-count threshold, it indicates that the character string to be displayed cannot be displayed completely. The preset line-count threshold can be determined according to the line-count information in the character string attribute information.
When the line count is larger than the preset line-count threshold, the characters after the tail line of the character string are hidden, and an ellipsis is added at the end of the tail line to indicate that the content is not displayed completely.
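The truncation behaviour just described can be sketched in a few lines. The function name and the exact placement of the ellipsis are illustrative assumptions; a fuller implementation might also drop trailing characters so the ellipsis itself fits within the text box width.

```javascript
// Keep at most maxLines lines; hide the rest and mark the last visible
// line with an ellipsis to show that content was cut off.
function clampLines(lines, maxLines) {
  if (lines.length <= maxLines) return lines;
  const kept = lines.slice(0, maxLines);
  kept[maxLines - 1] += "…";
  return kept;
}
```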
S1066, calculating character coordinate information of each character to be displayed according to the character string line feed information, the character string attribute information, the character string coordinate information and character spacing information of adjacent characters to be displayed.
After the character string line feed information is determined, the character coordinate information of each character to be displayed is calculated by combining the width information and line-count information of the text box in the character string attribute information, the character string coordinate information, and the character spacing information of adjacent characters. For example, if the character string coordinate information is p1(x1, y1), the coordinate of the first character to be displayed is the same as the character string coordinate, namely p1(x1, y1). If the spacing information between the second character and the first character is q, the coordinate of the second character is determined to be p2(x2, y1), where x2 = x1 + q. If, according to the character string line feed information, the first character of the second line is the 8th character, the coordinate of the 8th character is p8(x1, y2), where y2 = y1 + h and h is the line height, and so on. The character coordinate information of each character to be displayed is cached in the memory of the client for subsequent rendering.
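A minimal sketch of this coordinate calculation follows. The names are assumptions for illustration, and the sketch takes the per-character horizontal advance to be the character width plus the inter-character gap (one common convention for the spacing q above); each new line resets x to the string origin and advances y by the line height h.

```javascript
// lines: wrapped lines as strings; widths: per-character widths in
// reading order with line feed characters excluded; origin: the string
// coordinate p1; spacing: inter-character gap; lineHeight: h.
function layoutCharacters(lines, widths, origin, spacing, lineHeight) {
  const coords = [];
  let i = 0;
  lines.forEach((line, row) => {
    let x = origin.x;                        // each line starts at x1
    const y = origin.y + row * lineHeight;   // y advances by h per line
    for (const ch of line) {
      coords.push({ ch, x, y });
      x += widths[i] + spacing;              // horizontal advance
      i++;
    }
  });
  return coords;
}
```

In the flow above, the resulting per-character coordinates would be the data cached in client memory for the rendering step.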
S1068, generating character string typesetting information according to character spacing information of adjacent characters to be displayed, the character string attribute information, character string coordinate information, character width information of each character to be displayed, character coordinate information of each character to be displayed and the character string line feed information.
After the character string line feed information and the character coordinate information of each character to be displayed are obtained in the above steps, the character string typesetting information of the character string to be displayed is generated by combining the character spacing information of adjacent characters, the character string attribute information (the text box information corresponding to the character string, horizontal alignment, vertical alignment, and so on) and the character string coordinate information, and the character string typesetting information is cached in the memory of the client for subsequent rendering.
In practical application, the characters to be displayed and their style information are received by a character typesetting module, and after the character typesetting module performs the processing of the above steps, it outputs the character string typesetting information of the character string to be displayed.
And step 108, rendering in a target canvas layer according to the character strings to be displayed and the character string typesetting information to generate a rendering result corresponding to the character strings to be displayed.
After the character string to be displayed is obtained and the character string typesetting information is generated, the character information is rendered in the target canvas layer (canvas), that is, the characters to be displayed are drawn into the canvas.
Specifically, rendering the character strings to be displayed in the target canvas layer according to the character strings to be displayed and the character string typesetting information to generate rendering results corresponding to the character strings to be displayed, including S1082 to S1086:
s1082, creating a temporary canvas layer according to the text box information of the character string.
The character string typesetting information carries the character string text box information corresponding to the character string, and a temporary canvas layer (temporary canvas) matching the text box is created according to the character string text box information. For example, if the text box of the character string is width × height, a temporary canvas of width × height is created.
In practical applications, the scaling information of the character string to be displayed is also considered, and the temporary canvas layer is created according to the character string text box information and the scaling information. For example, if the width of the text box is width, the height is height, the horizontal scale of the characters is scaleX and the vertical scale is scaleY, then the width of the temporary canvas is width × scaleX and its height is height × scaleY. Creating the temporary canvas layer according to the scaling information ensures that the characters are not blurry when the character string is rendered and the rendering result is generated.
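The sizing rule above can be sketched as follows. This is an assumed helper, not the patent's code: the DOM call is guarded so the sizing math stays testable outside a browser, and Math.ceil is an assumption to keep pixel dimensions integral.

```javascript
// Scaled pixel size of the temporary canvas: width × scaleX by
// height × scaleY, so text drawn at full resolution stays sharp.
function tempCanvasSize(boxWidth, boxHeight, scaleX, scaleY) {
  return {
    width: Math.ceil(boxWidth * scaleX),
    height: Math.ceil(boxHeight * scaleY),
  };
}

function createTempCanvas(boxWidth, boxHeight, scaleX, scaleY) {
  const { width, height } = tempCanvasSize(boxWidth, boxHeight, scaleX, scaleY);
  if (typeof document === "undefined") {
    return { width, height };          // non-browser stub for illustration
  }
  const canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;
  // Pre-scale the context so drawing code can keep using text-box
  // coordinates while pixels are laid down at the scaled resolution.
  canvas.getContext("2d").scale(scaleX, scaleY);
  return canvas;
}
```

Drawing into this larger backing store and then transferring the result is what avoids the blurry ("virtual") characters mentioned above when the string is displayed scaled.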
S1084, drawing the temporary rendering result corresponding to the character string to be displayed on the temporary canvas layer according to character spacing information of adjacent characters to be displayed, the character string attribute information, character string coordinate information, character width information of each character to be displayed and character coordinate information of each character to be displayed.
After the temporary canvas is created, the characters are drawn word by word in the temporary canvas according to the character coordinate information of each character to be displayed, the character width information of each character to be displayed, the character spacing information of adjacent characters to be displayed, the character string attribute information and the character string coordinate information, generating the temporary rendering result corresponding to the character string to be displayed.
S1086, transmitting the temporary rendering result to a target canvas layer, and generating a rendering result corresponding to the character string to be displayed in the target canvas layer.
After the temporary rendering result is drawn in the temporary canvas, it is drawn onto the finally displayed canvas by combining the character string coordinate information and the character string attribute information (the position information of the character string text box, the rotation information of the character string to be displayed, and the scaling information of the character string to be displayed).
The method provided by the application realizes canvas-based typesetting and rendering of characters, supports complex character styles, and can be conveniently integrated into canvas-based applications, such as a picture editor, a 2D canvas game, and various web-page drawing tools.
In practical application, the text is rendered into the target canvas by the text rendering module according to the information obtained in the steps 102 to 106.
In practical applications, the character rendering result rendered to the canvas layer may be further edited, and the method further includes S1100 to S1108.
S1100, receiving an editing instruction aiming at the rendering result.
After the rendering result is generated and displayed, the user can edit it, and the client receives an editing instruction for the rendering result.
The editing instruction for the rendering result is usually triggered when the user clicks a character in the picture: a cursor appears at the clicked position and blinks, and keyboard input is enabled; alternatively, the user box-selects character content for subsequent copying, modification or deletion.
S1102, determining editing cursor position information in the rendering result in response to the editing instruction, and determining characters to be edited.
When the user clicks the rendering result, the coordinate information in the browser at the moment of the click can be obtained directly. A blinking cursor needs to appear near the clicked text, which means the relative coordinates of the click position within the text content area must be known; the browser coordinate information is therefore converted into the editing cursor position information, that is, the editing cursor position information is a relative coordinate with respect to the rendering result.
Specifically, determining editing cursor position information in the rendering result in response to the editing instruction includes:
Acquiring a canvas layer coordinate system of the target canvas layer and a character coordinate system of a rendering result corresponding to the character string to be displayed;
determining a rotation matrix of the character coordinate system relative to the canvas layer coordinate system according to the canvas layer coordinate system and the character coordinate system;
determining canvas position coordinates of the editing instruction in the target canvas layer;
and determining the position information of the editing cursor according to the position coordinates of the canvas and the rotation matrix.
The canvas layer coordinate system is the coordinate system of the target canvas layer, and the character coordinate system is the coordinate system of the text box corresponding to the rendering result. Referring to fig. 2, fig. 2 is a schematic diagram of a rendering result coordinate system according to an embodiment of the present application. As shown in fig. 2, the user clicks on "surprise stimulus", and a blinking cursor needs to appear between "surprise stimulus" and "parachuting travel" to indicate that the user has clicked the text area. First, the canvas layer coordinate system (X, Y) of the target canvas layer and the character coordinate system (X', Y') of the rendering result are obtained. According to the rotation information of the character string to be displayed in the character string attribute information, together with the canvas layer coordinate system and the character coordinate system, a rotation matrix α of the character coordinate system relative to the canvas layer coordinate system can be calculated, and a conversion relation between the two coordinate systems can then be established.
In the above steps, when the user clicks the canvas, the coordinates (X1, Y1) of the click in the target canvas layer can be obtained directly, and the editing cursor position information (X1', Y1') of the click position in the character coordinate system is then calculated from those coordinates and the rotation matrix α.
Alternatively, the editing cursor position can be derived from the distances and included angles between the click position and the X and Y axes of the text box's character coordinate system. The distances can be computed with a vector method or a triangle method, and the included angles with trigonometric functions. When assembling the cursor position: if the included angle of the click position relative to the X axis of the character coordinate system exceeds 90 degrees, the X coordinate of the cursor position is negative, otherwise it is positive; likewise, if the included angle relative to the Y axis exceeds 90 degrees, the Y coordinate is negative, otherwise it is positive. Combining these signs with the computed distances yields the editing cursor position information.
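The coordinate conversion described above can be sketched in plain JavaScript. This is an illustrative assumption, not code from the patent: the rotation matrix is reduced to a single rotation angle `angle` (in radians) and a text-box origin `origin` in canvas coordinates, and the click point is mapped into the character coordinate system by translating and then applying the inverse rotation.

```javascript
// Sketch: convert a click point from the canvas-layer coordinate
// system (X, Y) into the text box's character coordinate system
// (X', Y'). `origin` is the text box origin in canvas coordinates,
// `angle` its rotation in radians; both names are hypothetical.
function canvasToTextCoords(click, origin, angle) {
  // Translate so the text box origin becomes (0, 0)...
  const dx = click.x - origin.x;
  const dy = click.y - origin.y;
  // ...then apply the inverse rotation (rotate by -angle).
  const cos = Math.cos(-angle);
  const sin = Math.sin(-angle);
  return {
    x: dx * cos - dy * sin,
    y: dx * sin + dy * cos,
  };
}
```

With `angle = 0` the conversion degenerates to a pure translation, matching the intuition that an unrotated text box only needs its origin subtracted from the click coordinates.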
In practical application, when receiving an editing instruction, a character to be edited is generally determined according to the editing instruction, where the character to be edited specifically refers to a character to be edited in the editing process, and determining the character to be edited in the rendering result in response to the editing instruction includes:
acquiring cursor starting information and cursor ending information in the editing instruction;
And determining the character to be edited according to the cursor starting information and the cursor ending information.
In the above steps, after the editing cursor position information of the click position relative to the character coordinate system of the rendering result is obtained, the cursor position is represented by cursor start information (startCursorIndex) and cursor end information (endCursorIndex). When startCursorIndex is equal to endCursorIndex, a single blinking cursor appears on the screen; when startCursorIndex is smaller than endCursorIndex, a range of characters is framed (selected). For example, when startCursorIndex and endCursorIndex are both 0, the cursor blinks on the left side of the first character; when both are 2, the cursor blinks on the right side of the second character; when startCursorIndex = 0 and endCursorIndex = 4, the first 4 characters are framed; and when startCursorIndex = 3 and endCursorIndex = 6, the 4th to 6th characters are framed.
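The (startCursorIndex, endCursorIndex) convention described above can be captured in a small sketch. The index names follow the text; the function name and return shape are illustrative assumptions:

```javascript
// Interpret a (startCursorIndex, endCursorIndex) pair over the
// rendered text. Equal indices mean a blinking cursor; start < end
// means the characters in between are framed (selected).
function describeSelection(startCursorIndex, endCursorIndex, text) {
  if (startCursorIndex === endCursorIndex) {
    // Index 0 = left of the first character; index n = right of the
    // n-th character.
    return { kind: 'cursor', at: startCursorIndex, selected: '' };
  }
  return {
    kind: 'selection',
    selected: text.slice(startCursorIndex, endCursorIndex),
  };
}
```

For example, `describeSelection(3, 6, 'abcdef')` frames the 4th to 6th characters, exactly as in the last case above.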
S1104, calling an input component, receiving characters to be updated through the input component, and determining character attribute information of the characters to be updated.
The above steps complete the determination of the editing cursor position in response to a click; character input must be handled next. Since canvas is not an input component in the browser and is not suited to responding to keyboard input events, a native input component is required. When entering the input state, a hidden textarea input component is added to the interface, its text content is set to the character to be edited, and the textarea component is set to the focus state (the state that accepts input). Because clicking the canvas layer to select a field cancels the textarea's focus state, the textarea component must be reset to the focus state after every click on the canvas layer.
The character to be updated is received in the textarea component, and the character attribute information of the character to be updated, such as font, color and size, is obtained.
S1106, replacing the character to be edited with the character to be updated to generate a character string to be updated, and updating the character string typesetting information according to the character attribute information of the character to be updated to obtain character string typesetting information to be updated.
In the textarea component, the character to be edited is replaced by the character to be updated to generate the character string to be updated; whenever the content in the textarea changes, the text and the selection need to be re-typeset and re-rendered.
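A minimal sketch of the replacement in step S1106, assuming the characters to be edited are again described by the (startCursorIndex, endCursorIndex) pair over the original string (the function name is hypothetical):

```javascript
// Replace the selected characters [startCursorIndex, endCursorIndex)
// of the original string with the updated text, yielding the
// character string to be updated.
function replaceEdited(original, startCursorIndex, endCursorIndex, updated) {
  return (
    original.slice(0, startCursorIndex) +
    updated +
    original.slice(endCursorIndex)
  );
}
```

When the two indices are equal, the same function performs a plain insertion at the cursor, which is the other editing case the flow has to support.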
In practical application, in order to prevent re-rendering from affecting the original rendered picture, a temporary canvas of the same size as the text box corresponding to the rendering result is created to cover the text box when entering the editing state. The text information of the original rendering is transferred into the temporary canvas, and during editing only the content of the temporary canvas is modified, which avoids polluting the original rendered picture.
S1108, rendering in the target canvas layer according to the character string to be updated and the character string typesetting information to be updated to generate a rendering result corresponding to the character string to be updated.
After the character string to be updated and its typesetting information are obtained, rendering is performed in the temporary canvas to generate a corresponding temporary rendering result; the temporary rendering result is then transmitted to the target canvas layer to replace the original rendering result, producing a new rendering result corresponding to the character string to be updated. This completes the editing of the original rendered picture: canvas is used to render the characters, cursor and selection and to respond to the corresponding mouse operation events, while textarea receives keyboard input and shortcut-key operations; the combination of canvas and textarea completes the whole character processing flow.
The character processing method provided by the embodiment of the application comprises the steps of receiving at least one character to be displayed, determining character attribute information of each character to be displayed, splicing the at least one character to be displayed according to the character attribute information of each character to be displayed to generate a character string to be displayed, acquiring character string typesetting information of the character string to be displayed, and rendering and generating rendering results corresponding to the character string to be displayed in a target canvas layer according to the character string to be displayed and the character string typesetting information.
The following takes fig. 3 as an example to describe the application of the character processing method provided by the present application in a web picture editor. Fig. 3 shows a process flow chart of a character processing method applied to a web picture editor according to an embodiment of the present application, which specifically includes the following steps:
Step 302, receiving the characters to be displayed, namely "today's weather is good", and at the same time receiving the character style information of the characters to be displayed.
Step 304, dividing the characters to be displayed into individual characters, and measuring the width of each character by using the measureText method of the web canvas to obtain character width information of each character to be displayed.
Step 306, traversing each character, and splicing each character to be displayed in sequence according to the width information, the word spacing information, the maximum line width and the line feed symbols in the characters to generate a character string and corresponding line feed information.
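The splicing and line-breaking pass of step 306 can be sketched as follows. In the real flow the per-character widths would come from the canvas measureText method; here a measuring function is injected so the sketch stays self-contained, and the word spacing and maximum line width are plain parameters (all names are illustrative assumptions):

```javascript
// Traverse the characters in order, splicing them into lines: break
// on an explicit line-feed character, or when adding the next
// character (plus inter-character spacing) would exceed the maximum
// line width.
function wrapCharacters(chars, measure, spacing, maxLineWidth) {
  const lines = [];
  let line = [];
  let lineWidth = 0;
  for (const ch of chars) {
    if (ch === '\n') {
      // Explicit line-feed symbol in the characters: force a break.
      lines.push(line);
      line = [];
      lineWidth = 0;
      continue;
    }
    const w = measure(ch) + (line.length > 0 ? spacing : 0);
    if (line.length > 0 && lineWidth + w > maxLineWidth) {
      // Spliced length would exceed the maximum line width: wrap.
      lines.push(line);
      line = [ch];
      lineWidth = measure(ch);
    } else {
      line.push(ch);
      lineWidth += w;
    }
  }
  lines.push(line);
  return lines.map((l) => l.join(''));
}
```

The returned array of lines is the line-feed information the later steps consume when computing per-character positions and the text box size.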
Step 308, calculating the position information of each character to be displayed and the text box information corresponding to the character string according to the line feed information, the character spacing information, the horizontal alignment information, the vertical alignment information and the character positioning coordinate information.
Step 310, creating a first temporary canvas with a corresponding size according to the text box information and the text scaling information.
Step 312, drawing the characters in the first temporary canvas according to the typesetting information and the character style information, and, after drawing is complete, displaying the rendering result of the first temporary canvas in the target canvas in combination with the text box information, the text scaling information and the text rotation information.
Step 314, receiving an editing instruction for the rendering result.
Step 316, in response to the editing instruction, determining that the position of the editing cursor is in front of 'good' in the rendering result, simultaneously determining that the character to be edited is 'good', and creating a second temporary canvas.
Step 318, calling the textarea component, setting "good" into the component, modifying "good" to "clear" in response to keyboard instructions, and synchronizing the content of the textarea component to the second temporary canvas in real time.
Step 320, after editing is finished, synchronizing the content of the second temporary canvas to the target canvas to generate a new rendering result, completing the editing of the original rendering result.
The character processing method provided by the embodiment of the application is applied to a web picture editor, provides a rendering and editing response flow, realizes character typesetting and character editing of characters based on canvas, supports complex character patterns and editing under the conditions of text rotation and scaling, and can be conveniently integrated into canvas application to edit and typeset the characters.
Corresponding to the above character processing method embodiment, the present application further provides a character processing device embodiment, and fig. 4 shows a schematic structural diagram of a character processing device according to an embodiment of the present application. As shown in fig. 4, the apparatus includes:
A receiving module 402 configured to receive at least one character to be displayed;
A determining module 404 configured to determine character attribute information of each character to be displayed;
The splicing module 406 is configured to splice the at least one character to be displayed according to the character attribute information of each character to be displayed to generate a character string to be displayed, and acquire character string typesetting information of the character string to be displayed;
the rendering module 408 is configured to render in the target canvas layer according to the character string to be displayed and the character string typesetting information to generate a rendering result corresponding to the character string to be displayed.
Optionally, the determining module 404 is further configured to:
Measuring character width information of each character to be displayed;
And acquiring character drawing information of each character to be displayed.
Optionally, the stitching module 406 is further configured to:
acquiring character spacing information of adjacent characters to be displayed, character string attribute information of the character strings to be displayed and character string coordinate information of the character strings to be displayed;
Determining character string line changing information of the character strings to be displayed according to character width information of each character to be displayed and character spacing information between adjacent characters to be displayed;
Calculating character coordinate information of each character to be displayed according to the character string line feed information, the character string attribute information, the character string coordinate information and character spacing information of adjacent characters to be displayed;
Generating character string typesetting information according to character spacing information of adjacent characters to be displayed, the character string attribute information, character string coordinate information, character width information of each character to be displayed, character coordinate information of each character to be displayed and the character string linefeed information.
Optionally, the stitching module 406 is further configured to:
splicing each character to be displayed in sequence to obtain a character string to be displayed;
calculating the length of a spliced character string of the character string to be displayed according to the character width information of the character string to be displayed and the character spacing information between adjacent characters to be displayed;
and executing line feed operation on the character string to be displayed and generating character string line feed information under the condition that the length of the spliced character string is larger than a preset length threshold value.
Optionally, the stitching module 406 is further configured to:
and executing line feed operation on the character string to be displayed and generating character string line feed information under the condition that the character to be displayed comprises line feed characters.
Optionally, the apparatus further includes:
the statistics module is configured to count the number of character strings of the character strings to be displayed;
And the hiding module is configured to hide the character to be displayed after the line feed character of the last line of the character string to be displayed under the condition that the number of the character string is larger than a preset number threshold value.
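The line-count cap handled by the statistics and hiding modules can be sketched as a simple truncation over the wrapped lines. This is an illustrative assumption: the patent describes hiding the characters after the last permitted line's line feed, which over a line array amounts to dropping the trailing lines.

```javascript
// Hide everything beyond the permitted number of lines: keep only
// the first `maxLines` wrapped lines for rendering.
function clampLines(lines, maxLines) {
  return lines.length > maxLines ? lines.slice(0, maxLines) : lines;
}
```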
Optionally, the rendering module 408 is further configured to:
Creating a temporary canvas layer according to the text box information of the character string;
Drawing and generating a temporary rendering result corresponding to the character string to be displayed on the temporary canvas layer according to character spacing information of adjacent characters to be displayed, the character string attribute information, character string coordinate information, character width information of each character to be displayed and character coordinate information of each character to be displayed;
And transmitting the temporary rendering result to a target canvas layer, and generating a rendering result corresponding to the character string to be displayed in the target canvas layer.
Optionally, the rendering module 408 is further configured to:
acquiring character string scaling information;
And creating a temporary canvas layer according to the character string text box information and the character string scaling information.
Optionally, the apparatus further includes:
an editing instruction receiving module configured to receive an editing instruction for the rendering result;
A cursor determining module configured to determine editing cursor position information in the rendering result in response to the editing instruction, and determine a character to be edited;
the calling module is configured to call an input assembly, receive characters to be updated through the input assembly and determine character attribute information of the characters to be updated;
The character string updating module is configured to replace the character to be edited with the character to be updated, generate a character string to be updated, update the character string typesetting information according to the character attribute information of the character to be updated, and obtain character string typesetting information to be updated;
And the editing and rendering module is configured to render and generate a rendering result corresponding to the character string to be updated in the target canvas layer according to the character string to be updated and the character string typesetting information to be updated.
Optionally, the cursor determination module is further configured to:
Acquiring a canvas layer coordinate system of the target canvas layer and a character coordinate system of a rendering result corresponding to the character string to be displayed;
determining a rotation matrix of the character coordinate system relative to the canvas layer coordinate system according to the canvas layer coordinate system and the character coordinate system;
determining canvas position coordinates of the editing instruction in the target canvas layer;
and determining the position information of the editing cursor according to the position coordinates of the canvas and the rotation matrix.
Optionally, the cursor determination module is further configured to:
acquiring cursor starting information and cursor ending information in the editing instruction;
And determining the character to be edited according to the cursor starting information and the cursor ending information.
The character processing device provided by the embodiment of the application receives at least one character to be displayed, determines character attribute information of each character to be displayed, splices the at least one character to be displayed according to the character attribute information of each character to be displayed to generate a character string to be displayed, acquires character string typesetting information of the character string to be displayed, and renders and generates a rendering result corresponding to the character string to be displayed in a target canvas layer according to the character string to be displayed and the character string typesetting information.
The above is a schematic scheme of a character processing apparatus of the present embodiment. It should be noted that, the technical solution of the character processing apparatus and the technical solution of the character processing method belong to the same conception, and details of the technical solution of the character processing apparatus, which are not described in detail, can be referred to the description of the technical solution of the character processing method.
Fig. 5 illustrates a block diagram of a computing device 500, provided in accordance with an embodiment of the present application. The components of the computing device 500 include, but are not limited to, a memory 510 and a processor 520. Processor 520 is coupled to memory 510 via bus 530 and database 550 is used to hold data.
Computing device 500 also includes access device 540, which enables computing device 500 to communicate via one or more networks 560. Examples of such networks include the public switched telephone network (PSTN), a local area network (LAN), a wide area network (WAN), a personal area network (PAN), or a combination of communication networks such as the internet. The access device 540 may include one or more of any type of wired or wireless network interface, e.g. a network interface card (NIC), such as an IEEE 802.11 wireless local area network (WLAN) interface, a worldwide interoperability for microwave access (WiMAX) interface, an Ethernet interface, a universal serial bus (USB) interface, a cellular network interface, a Bluetooth interface, a near field communication (NFC) interface, and so forth.
In one embodiment of the application, the above-described components of computing device 500, as well as other components not shown in FIG. 5, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device illustrated in FIG. 5 is for exemplary purposes only and is not intended to limit the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 500 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smart phone), wearable computing device (e.g., smart watch, smart glasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 500 may also be a mobile or stationary server.
Wherein the processor 520, when executing the computer instructions, implements the steps of the character processing method.
The foregoing is a schematic illustration of a computing device of this embodiment. It should be noted that, the technical solution of the computing device and the technical solution of the character processing method belong to the same concept, and details of the technical solution of the computing device, which are not described in detail, can be referred to the description of the technical solution of the character processing method.
An embodiment of the application also provides a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the character processing method as described above.
The above is an exemplary version of a computer-readable storage medium of the present embodiment. It should be noted that, the technical solution of the storage medium and the technical solution of the character processing method belong to the same concept, and details of the technical solution of the storage medium which are not described in detail can be referred to the description of the technical solution of the character processing method.
The foregoing describes certain embodiments of the present application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The computer instructions include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in each jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunications signals.
It should be noted that, for the sake of simplicity of description, the foregoing method embodiments are all expressed as a series of combinations of actions, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily all required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The preferred embodiments of the application disclosed above are intended only to assist in the explanation of the application. Alternative embodiments are not intended to be exhaustive or to limit the application to the precise form disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and the practical application, to thereby enable others skilled in the art to best understand and utilize the application. The application is limited only by the claims and the full scope and equivalents thereof.
Claims (15)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111007595.XA CN113705156B (en) | 2021-08-30 | 2021-08-30 | Character processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113705156A CN113705156A (en) | 2021-11-26 |
CN113705156B true CN113705156B (en) | 2024-12-03 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114662450B (en) * | 2022-03-08 | 2025-03-11 | 阿里巴巴(中国)有限公司 | Graphic display method and electronic device |
CN115870629B (en) * | 2023-02-21 | 2023-05-16 | 飞天诚信科技股份有限公司 | Single-line font laser engraving method, device, equipment and medium |
CN115988170B (en) * | 2023-03-20 | 2023-08-11 | 全时云商务服务股份有限公司 | Method and device for clearly displaying English characters in real-time video combined screen in cloud conference |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101127035A (en) * | 2007-10-11 | 2008-02-20 | 金蝶软件(中国)有限公司 | Method and device drafting character string at targeted area |
CN104461483A (en) * | 2013-09-16 | 2015-03-25 | 北大方正集团有限公司 | Font rendering method and device, rendering platform client and server |
CN106610926A (en) * | 2015-10-27 | 2017-05-03 | 北京国双科技有限公司 | Data display method and device for Echarts (Enterprise Charts) |
CN108460003A (en) * | 2018-02-02 | 2018-08-28 | 广州视源电子科技股份有限公司 | Text data processing method and device |
CN109978972A (en) * | 2019-03-20 | 2019-07-05 | 珠海天燕科技有限公司 | A kind of method and device of copy editor in picture |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000259621A (en) * | 1999-03-09 | 2000-09-22 | Matsushita Electric Ind Co Ltd | Document editor, its edition method and recording medium storing document edition program |
US20070162842A1 (en) * | 2006-01-09 | 2007-07-12 | Apple Computer, Inc. | Selective content imaging for web pages |
CN103336690B (en) * | 2013-06-28 | 2017-02-08 | 优视科技有限公司 | HTML (Hypertext Markup Language) 5-based text-element drawing method and device |
CN105095157B (en) * | 2014-04-18 | 2018-06-22 | 腾讯科技(深圳)有限公司 | character string display method and device |
CN107172474B (en) * | 2017-03-31 | 2020-02-04 | 武汉斗鱼网络科技有限公司 | Method and device for drawing bullet screen by using canvas |
CN107391159B (en) * | 2017-08-09 | 2020-10-23 | 海信视像科技股份有限公司 | Method and device for realizing characters of UI text box of smart television |
CN108279964B (en) * | 2018-01-19 | 2021-09-10 | 广州视源电子科技股份有限公司 | Method and device for realizing covering layer rendering, intelligent equipment and storage medium |
CN111259301B (en) * | 2020-01-19 | 2023-05-02 | 北京飞漫软件技术有限公司 | Method, device, equipment and storage medium for rendering elements in HTML page |
CN112507669B (en) * | 2020-12-07 | 2024-08-06 | 深圳市欢太科技有限公司 | Text processing method and device, storage medium and electronic equipment |
CN112614211B (en) * | 2020-12-29 | 2023-09-22 | 广州光锥元信息科技有限公司 | Method and device for text and image self-adaptive typesetting and animation linkage |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||