Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the application.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Several terms involved in the present application are described and explained below:
SVGA (Scalable Vector Graphics Animation) is a cross-platform open-source animation format compatible with iOS, Android, and Web.
WebGL (Web Graphics Library) is a 3D drawing protocol that combines JavaScript with OpenGL ES 2.0. By adding a JavaScript binding for OpenGL ES 2.0, WebGL can provide hardware-accelerated 3D rendering for the HTML5 Canvas, so that web developers can more smoothly display 3D scenes and models in a browser by means of the system graphics card, and can create complex navigation and data visualizations.
Canvas is used to draw graphics on a web page. The HTML5 canvas draws images on web pages using JavaScript. The canvas is a rectangular area, each pixel of which can be controlled. Canvas provides a variety of methods for drawing paths, rectangles, circles, and text, and for adding images.
Before explaining the embodiment of the present application in detail, an application scenario of the embodiment of the present application is described. The method provided by the embodiment of the application is applied to the animation display scene.
With the development of the live broadcast business, eye-catching animations can be displayed on live broadcast pages to improve their display effect and enhance their appeal. Fig. 1 is a schematic diagram of an application environment to which the animation display provided in the embodiment of the present application is applicable. As shown in fig. 1, a related engine in a browser 101, such as an animation display engine, is used to obtain sequence frames of a related animation; the animation elements in the sequence frames are transformed and then rendered on a page to obtain a page animation, which is then sent to a terminal device 102 for display. The terminal device 102 includes smart phones, notebook computers, desktop computers, tablet computers, and other devices capable of browsing pages.
Eye-catching animations are often seen on general web pages or video interfaces. In live video animation, the SVGA (Scalable Vector Graphics Animation) format is generally adopted to draw the animation. The principle of SVGA animation is that a file in SVGA format is parsed into data such as key frames, vector paths, and patterns, and the data is then applied to picture resources.
When an SVGA animation is played, it can only follow the preset animation effect; the user cannot apply displacement, scaling, rotation, or other transformations to the SVGA animation, so the animation display effect is monotonous. If the display effect of the SVGA animation is to be changed, a developer is required to rewrite the display code of the SVGA animation, which involves a large and complex workload.
For example, consider a game developed for a live broadcast room. Balloons continuously rise and move on the browser page, and an SVGA animation is played, such as a balloon-bursting animation when the user clicks on a balloon. With conventional SVGA animation, when too many balloons burst on the same screen, performance suffers: each time the SVGA animation is played it needs to be parsed again, and each SVGA animation element generates a corresponding visible canvas in the page, so the more visible canvases in the page, the more performance is consumed, eventually resulting in stuttering. Moreover, developers hope that the balloon-bursting SVGA animation can be edited, amplified, rotated, and so on during playback, which a conventional SVGA player does not support.
The present application provides an animation display method, an animation display apparatus, an animation display device, and a computer-readable storage medium, which aim to solve the above technical problems in the prior art.
The following describes the technical scheme of the present application and how the technical scheme of the present application solves the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 2 is a flowchart of an animation display method according to an embodiment of the present application, which can be applied to an animation display engine, such as an animation display engine of a browser.
As shown in fig. 2, the animation demonstration method may include the steps of:
S210, acquiring a sequence frame of the SVGA animation.
The sequence frame of the SVGA animation is obtained from the address of the SVGA animation stored in advance. The SVGA animation can be a local animation, an animation downloaded in advance from a network, or an animation which is self-made.
Before step S210, the method may further include the steps of:
S200A1, loading the SVGA animation and playing it silently.
A storage address of the SVGA animation is acquired, and the SVGA player loads a designated SVGA file, where the designated SVGA file is the SVGA animation to be rendered on a page.
The SVGA player is then used to play the SVGA animation silently.
S200A2, calling an image data interface to record the SVGA animation at a preset recording rate during the silent playing of the SVGA animation, and obtaining and caching sequence frames of the SVGA animation.
In this embodiment, the image data interface may be the getImageData interface of a Canvas.
Since the original playing rate of the SVGA animation may not be consistent with the target playing rate of the animation finally displayed to the user, in this embodiment the SVGA animation is recorded at a preset recording rate by calling the getImageData interface of the Canvas while the SVGA animation is played silently, so as to obtain and cache sequence frames of the SVGA animation, where the preset recording rate is consistent with the target playing rate of the finally displayed animation.
For example, the original playing rate of the SVGA animation is 20 frames/second, while the target playing rate of the animation finally presented to the user is 60 frames/second. The SVGA animation is therefore re-recorded and cached at a preset recording rate equal to the target playing rate, so that it can conveniently be embedded into other canvas-based animations later, with its playing rate kept consistent with the playing rate of those subsequent canvas animations. When animation rendering is needed, the SVGA sequence frames are obtained from the cache, avoiding the waste of computing resources.
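As an illustrative sketch (the function and variable names here are hypothetical and not part of any SVGA player API; the real engine captures frames with the Canvas getImageData interface during silent playback), the relationship between recorded frames at the preset recording rate and source frames at the original rate can be expressed as:

```javascript
// Map each recorded frame (at the target/recording rate) back to the
// source frame of the original SVGA animation (at its native rate).
function sourceFrameIndex(recordedFrame, recordRate, originalRate, totalSourceFrames) {
  const t = recordedFrame / recordRate;      // timestamp of the recorded frame, in seconds
  const idx = Math.floor(t * originalRate);  // nearest earlier source frame
  return Math.min(idx, totalSourceFrames - 1); // clamp to the last frame
}

// A 20 frames/second animation recorded at 60 frames/second: every
// source frame is sampled three times in the cached sequence.
const samples = [];
for (let f = 0; f < 6; f++) {
  samples.push(sourceFrameIndex(f, 60, 20, 20));
}
console.log(samples); // → [0, 0, 0, 1, 1, 1]
```

This resampling is why the cached sequence frames can later be stepped through one per frame of the 60 frames/second target canvas without any rate conversion at render time.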
S220, configuring transformation parameters of corresponding animation elements in each sequence frame, and calculating to obtain the pose corresponding to the animation elements according to the transformation parameters.
Based on the layout of the animation elements on the presentation page, that is, the position information, height information, and the like of the animation elements on the page, in this embodiment the transformation parameters of the animation effect of each animation element are configured by writing animation transformation code.
The following is an exemplary configuration of transformation parameters for a certain animation element:
origin = {x: 100, y: 100, width: 100, height: 100}
animat = {x: 200, y: 200, width: 50, height: 50}
animatOption = {time: 5000, timeFn: linear}
Here, origin is the original coordinate position, and animat is the animation end coordinate position.
animatOption is an animation option that includes the duration (time) and the animation speed type (timeFn, typically one of three types: constant-speed motion, decelerating motion, and accelerating motion).
The animation effect represented by these transformation parameters is that the animation element moves 100 units to the right and downward at a constant speed over 5 seconds from its original coordinates, while its width and height are each halved.
Of course, in other embodiments, different transformation parameters may be set according to actual situations to achieve different animation effects.
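A minimal sketch of how the example transformation parameters above might be interpolated over time, assuming a linear timeFn (the helper names here are illustrative, not part of the described engine):

```javascript
// The example configuration from this description.
const origin = { x: 100, y: 100, width: 100, height: 100 };
const animat = { x: 200, y: 200, width: 50, height: 50 };
const animatOption = { time: 5000, timeFn: 'linear' };

// Compute the interpolated pose at a given elapsed time, assuming the
// linear timeFn; other timeFn values would use a different easing curve.
function poseAt(elapsedMs) {
  const p = Math.min(elapsedMs / animatOption.time, 1); // progress in [0, 1]
  const lerp = (a, b) => a + (b - a) * p;
  return {
    x: lerp(origin.x, animat.x),
    y: lerp(origin.y, animat.y),
    width: lerp(origin.width, animat.width),
    height: lerp(origin.height, animat.height),
  };
}

console.log(poseAt(2500)); // halfway: { x: 150, y: 150, width: 75, height: 75 }
```

At 2.5 seconds the element has moved half of the 100 units and is halfway through its size change, matching the constant-speed effect described above.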
The pre-written animation transformation code is acquired and parsed to obtain the transformation parameters preset for each animation element. The transformation parameters include at least one of an animation scaling parameter, a rotation parameter, a movement track, an animation start position, an animation end position, an animation movement speed, a movement direction, and a preset display duration.
In this embodiment, the rendered pose of an animation element on the page differs over time, the pose including a position and a posture. The position refers to where the animation element is rendered at a certain moment, and the posture refers to the size after scaling, the rotation angle, and the like of the animation element when it is rendered at that moment.
For example, suppose the transformation parameters of a preset animation element correspond to a transformation effect in which the page animation is played at 60 frames per second: the original position coordinate of a balloon animation element is 0, and it needs to move 240 units to the right within 2 seconds, that is, 2 units per frame. If the preset display duration is 2 seconds and the current actual display duration is 0.5 seconds, with 1.5 seconds remaining, the animation element is drawn at position coordinate 60; if the corresponding remaining time is 0.5 seconds, the current animation element is calculated to be drawn at position coordinate 180.
The above example is described in terms of displacement transformation; scaling transformation and rotation transformation of an animation element follow the same principle. The position corresponding to the displacement transformation and the posture corresponding to the scaling and rotation transformations are combined to obtain the rendering pose of the current animation element.
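The displacement computation in the balloon example above can be sketched as follows; positionAt is a hypothetical helper, not part of the described engine:

```javascript
// Position of an element moving at constant speed, computed from the
// preset display duration and the actual (elapsed) display duration.
function positionAt(actualMs, presetMs, startCoord, totalDistance) {
  const progress = Math.min(actualMs / presetMs, 1); // fraction of the motion completed
  return startCoord + totalDistance * progress;
}

// Balloon element: start at 0, move 240 units right within 2 seconds.
console.log(positionAt(500, 2000, 0, 240));  // 0.5 s elapsed  → 60
console.log(positionAt(1500, 2000, 0, 240)); // 0.5 s remaining → 180
```

Because the pose is derived from elapsed time rather than a frame counter, the same computation works regardless of whether the target canvas actually achieves 60 frames per second.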
S230, respectively rendering the animation elements into corresponding pre-rendering canvases through drawing interfaces according to the poses to obtain pre-rendering animations.
The pre-rendering canvas and the target canvas are canvases with the same size, wherein the pre-rendering canvas is invisible canvas, and the target canvas is visible canvas.
In this embodiment, the animation elements are respectively rendered onto their corresponding pre-rendering canvases, in combination with their corresponding poses, through a drawing interface such as the WebGL interface; that is, the SVGA animation is drawn onto a pre-rendering canvas through the WebGL interface, so as to obtain the pre-rendering animation.
Furthermore, a plurality of animation elements displayed at the same time can be drawn in the same pre-rendering canvas through a drawing interface, so as to obtain the pre-rendering animation.
For example, a plurality of "balloon" animations are shown distributed at different locations on the same page, and when the page is shown, a plurality of "balloon" animations, such as "balloon 1", "balloon 2" and "balloon 3", are shown simultaneously. At this time, "balloon 1", "balloon 2" and "balloon 3" may all be drawn onto the same invisible pre-rendering canvas to obtain the pre-rendering animation. In this embodiment, the prerendered animation may be a frame independent animation.
S240, rendering the pre-rendering animation in each pre-rendering canvas into a target canvas for display.
And rendering the pre-rendering animation drawn on the pre-rendering canvas as a whole on the target canvas for presentation.
In one embodiment, the rendering the pre-rendering animation in each of the pre-rendering canvases to the target canvas for playing in step S240 may include the following steps:
S2401, calling an image drawing interface of a target canvas, and rendering the pre-rendering animation rendered into the pre-rendering canvas onto the corresponding target canvas.
The image drawing interface may be the drawImage interface; the drawImage() method draws an image, canvas, or video onto the target canvas. The drawImage() method is also capable of drawing certain portions of an image and/or increasing or decreasing the size of an image.
In this embodiment, the drawImage interface of a target canvas is invoked to render the pre-rendering animation drawn on a pre-rendering canvas onto the corresponding target canvas, which is a visible canvas. It can be understood that drawing multiple animation elements on the same invisible pre-rendering canvas results in an overall frame of pre-rendering animation (which may be regarded as an image) that is then transferred to the visible target canvas.
S2402, displaying the animation through the target canvas.
Since the size of the pre-rendering canvas is the same as that of the target canvas, the pre-rendering animation on the pre-rendering canvas is directly transferred to the target canvas for presentation.
According to the animation display method provided by this embodiment, sequence frames of the SVGA animation are obtained; transformation parameters of the animation elements corresponding to each sequence frame are configured; the pose corresponding to each animation element is calculated according to the transformation parameters; the animation elements are respectively rendered into corresponding pre-rendering canvases through the drawing interface according to the poses to obtain pre-rendering animations; and the pre-rendering animation in each pre-rendering canvas is rendered into the target canvas for display. According to this technical scheme, additional transformation animations are set when the SVGA animation is played, and a plurality of SVGA animations that need to be played simultaneously are rendered to the same target canvas for display, thereby enriching the display forms of the animations, reducing the rendering of the target canvas, and reducing the consumption of the CPU (Central Processing Unit) during animation rendering.
In the related art, conventional SVGA does not provide an interface for a developer to perform secondary editing on an SVGA animation to realize secondary animations such as displacement, scaling, or rotation. In addition, when a conventional SVGA animation player is used to play SVGA animations, each SVGA animation element played generates a new Canvas; when a plurality of SVGA animation elements are played in a page at the same time, the multiple Canvases waste the performance of the playing device and may even cause stuttering. According to the technical scheme of the present application, the SVGA animation can be edited secondarily, and additional animations such as displacement, scaling, or rotation can be set freely when the SVGA animation is played, so a developer who needs some animation effect no longer has to re-export and modify the SVGA file to re-edit the SVGA animation effect. Furthermore, a plurality of animation elements are drawn together on the invisible pre-rendering canvas and then rendered onto the visible target canvas, so that the number of target canvases is reduced and the consumption of the CPU (Central Processing Unit) during animation demonstration is reduced.
In order to illustrate the technical solution of the present application more clearly, more embodiments of the page-animation-based rendering method are provided below.
In an embodiment, the calculating, in step S220, the pose corresponding to the animation element according to the transformation parameter may include the following steps:
S2201, performing animation transformation on the animation elements according to the transformation parameters.
Wherein the animation transformation comprises at least one of animation scaling, animation rotation and animation movement, and the transformation parameters comprise at least one of animation scaling parameters, rotation parameters, movement tracks, animation start positions, animation end positions, animation movement speeds, movement directions and preset presentation durations.
Each animation element is provided with a corresponding preset display duration. For example, the animation effect corresponding to the transformation parameters of a balloon animation element is that the balloon floats from bottom to top and disappears after being displayed for 5 seconds. In this embodiment, the preset display duration refers to the maximum length of time for which the animation element is displayed on the page, and the actual display duration refers to the length of time for which the current animation element has been displayed on the page.
If it is detected that the actual display duration of the current animation element has reached the preset display duration, the current animation element is removed from the element queue: for example, a balloon animation element is removed from the element queue, and the next animation element, such as a treasure box, is read. If it is detected that the actual display duration of the current animation element has not reached the preset display duration, a transformation matrix corresponding to the current animation element is calculated according to the time remaining in the preset display duration, and the animation element is transformed according to that transformation matrix.
For example, describing with displacement transformation, the current animation element has an original coordinate position of (0, 0) and an end coordinate position of (100, 0), and the preset display duration (i.e., the movement duration) is 5 seconds, and moves at a constant speed. If the actual display duration is 2 seconds, the remaining display duration is 3 seconds, and at this time, the current position is calculated as (40, 0).
Similarly, suppose the original deflection angle of the current animation element is 0 degrees, the end deflection angle is 90 degrees, the preset display duration is 5 seconds, and the animation element deflects at a constant speed. If the actual display duration is 2 seconds, the remaining display duration is 3 seconds, and the current deflection angle is calculated to be 36°.
The transformation matrix of the current animation element at the current time can be calculated from the time remaining in the preset display duration, where the transformation matrix includes a scaling matrix, a displacement matrix, a rotation matrix, and the like.
S2202, determining the pose of the animation element according to the animation transformation result of the animation element.
In this embodiment, the corresponding transformation matrices are obtained according to the animation transformation result of the animation element, where the transformation matrices may include a scaling matrix, a displacement matrix, a rotation matrix, and the like. Further, the result of multiplying the transformation matrices together is used as the pose of the animation element. In this embodiment, if scaling, displacement, or rotation does not occur, the corresponding transformation matrix is the identity matrix.
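A minimal sketch of this matrix composition, using 3x3 homogeneous matrices for 2D transforms (a WebGL engine would typically use 4x4 matrices; all names here are illustrative):

```javascript
// Row-major 3x3 homogeneous matrices for 2D transforms. When a transform
// does not occur, its matrix is the identity matrix.
const identity = () => [[1, 0, 0], [0, 1, 0], [0, 0, 1]];
const translation = (tx, ty) => [[1, 0, tx], [0, 1, ty], [0, 0, 1]];
const scaling = (sx, sy) => [[sx, 0, 0], [0, sy, 0], [0, 0, 1]];
const rotation = (deg) => {
  const r = (deg * Math.PI) / 180;
  return [[Math.cos(r), -Math.sin(r), 0], [Math.sin(r), Math.cos(r), 0], [0, 0, 1]];
};

function multiply(a, b) {
  const out = identity();
  for (let i = 0; i < 3; i++)
    for (let j = 0; j < 3; j++)
      out[i][j] = a[i][0] * b[0][j] + a[i][1] * b[1][j] + a[i][2] * b[2][j];
  return out;
}

// The displacement example above: (0, 0) -> (100, 0) over 5 s, 2 s elapsed,
// with no rotation or scaling (their matrices stay the identity).
const progress = 2 / 5;
const pose = multiply(translation(100 * progress, 0),
                      multiply(rotation(0), scaling(1, 1)));
console.log(pose[0][2], pose[1][2]); // → 40 0
```

Multiplying the displacement, rotation, and scaling matrices yields one combined pose matrix, so the renderer only applies a single transform per element.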
In an embodiment, in step S230, the animation elements are respectively rendered into corresponding pre-rendering canvases through drawing interfaces according to the pose, so as to obtain a pre-rendering animation, which may include the following steps:
S2301, determining a rendering area of the animation element on a pre-rendering canvas according to the pose.
In this embodiment, animation elements with different poses occupy different rendering areas on the pre-rendering canvas; for example, the rendering area of an animation element on the pre-rendering canvas is determined according to pose-related information such as the size, rotation angle, and shape of the animation element.
The position coordinates of key points in the animation element are determined according to the pose of the animation element, and the rendering area of the animation element on the pre-rendering canvas is determined according to the position coordinates of those key points. The area enclosed by the lines connecting the key points on the outline of the animation element is determined as the rendering area on the pre-rendering canvas.
In an embodiment, the determining, in step S2301, a rendering area of the animation element on the pre-rendering canvas according to the pose may include the steps of:
(1) Determining the minimum circumscribed rectangle of the animation element according to the pose.
The minimum bounding rectangle (MBR) refers to the maximum extent of a two-dimensional shape (such as a point, line, or polygon) expressed in two-dimensional coordinates, namely, the rectangle whose boundary is defined by the maximum abscissa, the minimum abscissa, the maximum ordinate, and the minimum ordinate of the vertices of the given two-dimensional shape.
In this embodiment, the four vertices corresponding to the animation element are determined according to the pose, and the area enclosed by connecting the four vertices in sequence is determined as the minimum circumscribed rectangle of the animation element.
(2) The minimum bounding rectangle is determined as a rendering area of the animation element on a pre-rendering canvas.
In this embodiment, the area occupied by the minimum circumscribed rectangle of the animation element on the pre-rendering canvas is determined as the rendering area of the animation element on the pre-rendering canvas.
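A minimal sketch of computing such a minimum bounding rectangle from an element's transformed vertices (the function and variable names are illustrative):

```javascript
// Compute the axis-aligned minimum bounding rectangle of a set of
// vertices, i.e. the rendering area on the pre-rendering canvas.
function minimumBoundingRect(vertices) {
  const xs = vertices.map(v => v.x);
  const ys = vertices.map(v => v.y);
  const minX = Math.min(...xs), maxX = Math.max(...xs);
  const minY = Math.min(...ys), maxY = Math.max(...ys);
  return { x: minX, y: minY, width: maxX - minX, height: maxY - minY };
}

// A 100x100 square rotated 45 degrees about its center (50, 50): its
// corners land on the axes through the center, and the MBR grows to
// roughly 141.42 x 141.42 (100 * sqrt(2)).
const rotated = [
  { x: 50, y: 50 - Math.SQRT2 * 50 },
  { x: 50 + Math.SQRT2 * 50, y: 50 },
  { x: 50, y: 50 + Math.SQRT2 * 50 },
  { x: 50 - Math.SQRT2 * 50, y: 50 },
];
const rect = minimumBoundingRect(rotated);
console.log(rect.width.toFixed(2)); // → "141.42"
```

The example shows why the MBR must be recomputed per frame: rotation alone changes the rendering area even though the element's own size is unchanged.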
S2302, rendering at least two animation elements to a rendering area corresponding to the same pre-rendering canvas through a drawing interface to obtain the pre-rendering animation.
A drawing interface, such as the WebGL interface, is called to render at least two animation elements to their corresponding rendering areas in the same pre-rendering canvas according to the poses determined by the transformation parameters, thereby obtaining the pre-rendering animation. A pre-rendering animation may be understood as an animation that is drawn on an invisible canvas and is not actually presented to the user.
In this embodiment, a page animation that would otherwise be based on CSS is drawn with WebGL using graphics algorithms, and multiple animation elements are combined onto one canvas for presentation, so that the same canvas can draw multiple animation elements at one time and the page redrawing rate is reduced. The page redrawing rate refers to the number of times per second that the animation elements are drawn when they change; the animation elements are rendered onto the page by the CPU (Central Processing Unit) and/or the GPU (Graphics Processing Unit).
Fig. 3 is a flowchart of a pre-rendering animation generation method according to an embodiment of the present application, as shown in fig. 3, in an embodiment, at least two animation elements are rendered onto a rendering area corresponding to the same pre-rendering canvas through a drawing interface in step S2302 to obtain a pre-rendering animation, which may include the following steps:
S301, calling a WebGL interface, and rendering the four vertices corresponding to the animation element onto the corresponding rendering area of the pre-rendering canvas through the WebGL interface.
WebGL enables web pages to perform 3D rendering in a canvas using an API based on OpenGL ES 2.0 in browsers that support the HTML <canvas> tag, without using any plug-ins; WebGL elements may be mixed with other HTML elements and may be combined with other parts of the page or the page background.
Generally, the shape of an animation element is rectangular, such as a rectangle or a square. After the four vertices of the animation element, located at the ends of its diagonals, are determined, the area enclosed by connecting the four vertices in sequence is determined as the area occupied by the animation element.
In this embodiment, a WebGL interface is called, and the four vertices corresponding to the animation element are first rendered onto the corresponding rendering area of the pre-rendering canvas through the WebGL interface.
S302, after four vertexes are rendered, rendering texture images of the animation elements in an area surrounded by the four vertexes.
All animation elements in the rendering queue contain corresponding texture images.
Further, after the four vertices are rendered on the pre-rendering canvas, the texture image of the animation element is obtained, and the texture image of the animation element is rendered in the area enclosed by the lines connecting the four vertices.
Specifically, the rendering flow of each animation element may include at least one of the following steps:
(1) Setting the original coordinates of the 4 vertices through WebGL;
(2) Setting the displacement matrices of the 4 vertices through webGL.uniformMatrix4fv to realize the displacement transformation of the animation element;
(3) Setting the rotation matrices of the 4 vertices through webGL.uniformMatrix4fv to realize the rotation transformation of the animation element;
(4) Setting the scaling matrices of the 4 vertices through webGL.uniformMatrix4fv to realize the scaling transformation of the animation element;
(5) Binding the texture rendered by the current animation element into the vertex scope through WebGL;
(6) Drawing the 4 vertices into the canvas through webGL.drawArrays; the texture is automatically attached within the vertex range.
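As a hedged illustration of step (2): WebGL's uniformMatrix4fv expects a 16-element array in column-major order, so a displacement matrix in that layout might be built as follows. The function name is hypothetical, and no real WebGL context is created here; in a browser the array would be passed via a call of the form gl.uniformMatrix4fv(location, false, m).

```javascript
// Build a 4x4 displacement (translation) matrix in the column-major
// Float32Array layout that uniformMatrix4fv expects. In column-major
// order, the translation components occupy indices 12 and 13.
function translationMatrix4(tx, ty) {
  return new Float32Array([
    1,  0,  0, 0,  // column 0
    0,  1,  0, 0,  // column 1
    0,  0,  1, 0,  // column 2
    tx, ty, 0, 1,  // column 3
  ]);
}

// Displacement from the earlier example: 40 units right, 0 down.
const m = translationMatrix4(40, 0);
console.log(m[12], m[13]); // → 40 0
```

Getting the column-major layout right matters because uniformMatrix4fv's transpose flag must be false in WebGL 1.0, so the CPU-side array has to already be in the order the shader expects.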
S303, drawing texture images corresponding to at least two animation elements on the same pre-rendering canvas to obtain the pre-rendering animation.
Based on the above example, the vertices of at least two animation elements are rendered onto the same pre-rendering canvas, and the texture images corresponding to the at least two animation elements are rendered in the rendering areas enclosed by their vertex connecting lines, so as to obtain the pre-rendering animation based on the pre-rendering canvas.
Optionally, each frame page corresponds to a rendering queue, and when all elements in the rendering queue are rendered, the next frame page animation is rendered.
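This per-frame queue behavior can be sketched as follows; a plain loop stands in for the browser's frame loop (requestAnimationFrame), and all names are illustrative:

```javascript
// Each frame has a rendering queue; the next frame's animation is
// rendered only after every element of the current queue is rendered.
function renderFrames(frameQueues, renderElement) {
  const order = [];
  for (const queue of frameQueues) {
    for (const element of queue) { // drain this frame's queue completely
      renderElement(element);
      order.push(element);
    }
  }
  return order;
}

// Two frames: the first shows two balloons, the second shows one.
const frames = [['balloon1', 'balloon2'], ['balloon1']];
const rendered = renderFrames(frames, () => {});
console.log(rendered); // → ['balloon1', 'balloon2', 'balloon1']
```

Draining one queue per frame guarantees that all elements sharing a pre-rendering canvas are drawn together before the canvas is transferred to the target canvas.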
According to the animation display method provided by this embodiment, a plurality of animation elements are drawn on the same pre-rendering canvas to obtain the pre-rendering animation, and the pre-rendering animation is then rendered onto the target canvas for page display, so that the number of target canvases used when rendering the page is reduced, the consumption of the CPU (Central Processing Unit) for animation processing in the rendering engine is reduced, and page stuttering is avoided.
In order to explain the present application in more detail, the following exemplary embodiments of the present application are described with reference to fig. 4A and 4B. Fig. 4A is a flowchart of an initialization phase of an animation display provided by an embodiment of the present application, and fig. 4B is a flowchart of a rendering display phase of an animation display provided by an embodiment of the present application.
In this embodiment, the animation presentation includes an initialization phase and a rendering presentation phase.
As shown in fig. 4A, the initialization phase of the animation presentation includes:
S401A, initializing an animation display engine by a browser end;
S402A, setting the playing rate of the target canvas.
S403A, loading the SVGA animation from the designated source SVGA file.
S404A, the SVGA animation is silently played in a special canvas for SVGA recording, and the SVGA animation is recorded at a preset recording rate.
The preset recording speed is the same as the playing speed of the target canvas.
S405A, buffering the recorded SVGA animation sequence frames for subsequent use.
As shown in fig. 4B, the rendering presentation phase of the animation presentation includes:
S401B, acquiring a sequence frame of SVGA animation.
S402B, configuring transformation parameters of animation elements corresponding to each sequence frame.
The transformation parameters include at least one of an animation scaling parameter, a rotation parameter, a movement track, an animation start position, an animation end position, an animation movement speed, a movement direction, and a preset presentation duration.
S403B, calculating and obtaining the pose corresponding to the animation element according to the transformation parameters.
S404B, rendering the four vertices of the animation element onto a pre-rendering canvas through the WebGL interface according to the pose.
S405B, rendering the texture image corresponding to the animation element onto the pre-rendering canvas through the WebGL interface.
The texture images of the animation elements of the current frame are rendered on the same geometric plane through the texture unit mechanism of WebGL.
S406B, if a target canvas exists, rendering the pre-rendering animation on the pre-rendering canvas onto the target canvas.
In a scenario where a target canvas exists, the pre-rendering canvas is an invisible canvas, the target canvas is a visible canvas, and the pre-rendering canvas and the target canvas are the same size.
S407B, if no target canvas exists, directly displaying the pre-rendering animation on the pre-rendering canvas.
In a scenario where there is no target canvas, the pre-rendering canvas is a visible canvas.
S408B, rendering of the canvas animation of this frame is completed, and rendering of the canvas animation of the next frame is performed.
The above examples are only used to assist in explaining the technical solutions of the present disclosure, and the illustrations and specific procedures related thereto do not constitute limitations on the usage scenarios of the technical solutions of the present disclosure.
Related embodiments of the animation exhibiting device are described in detail below.
Fig. 5 is a schematic structural diagram of an animation display device according to an embodiment of the present application, where the animation display device may be implemented in an animation display engine, such as an animation display engine in a browser.
Specifically, as shown in fig. 5, the animation display device 200 includes a sequence frame acquisition module 210, an animation pose calculation module 220, an animation pre-rendering module 230, and an animation display module 240.
The sequence frame acquisition module 210 is configured to acquire a sequence frame of the SVGA animation;
The animation pose calculation module 220 is configured to configure transformation parameters of the animation elements corresponding to each sequence frame, and calculate the pose corresponding to the animation elements according to the transformation parameters;
The animation pre-rendering module 230 is configured to render the animation elements into corresponding pre-rendering canvases through a drawing interface according to the pose, so as to obtain pre-rendering animations;
and the animation display module 240 is configured to render the pre-rendered animation in each pre-rendering canvas onto the target canvas for display.
The animation display device provided by this embodiment enables additional transformation animations to be set when SVGA animations are played, and renders a plurality of SVGA animations that need to be played simultaneously onto the same target canvas for display, thereby helping to enrich the display forms of the animations, reduce rendering on the target canvas, and reduce CPU (Central Processing Unit) consumption when the animations are rendered.
In one possible implementation, the animation display device 200 further comprises an animation recording module, wherein the animation recording module comprises a silent playing unit and an animation recording unit;
the silent playing unit is configured to load the SVGA animation and play it silently, and the animation recording unit is configured to call an image data interface to record the SVGA animation at a preset recording rate during the silent playing of the SVGA animation, so as to obtain the sequence frames of the SVGA animation.
In one possible implementation, the animation pose calculation module 220 includes an animation transformation unit and a pose determination unit;
The animation transformation unit is configured to perform animation transformation on the animation elements according to the transformation parameters, wherein the animation transformation includes at least one of animation scaling, animation rotation, and animation movement;
and the pose determining unit is configured to determine the pose of the animation element according to the animation transformation result of the animation element.
In one possible implementation, the animation pre-rendering module 230 includes a rendering region determining unit and a pre-rendering animation obtaining unit;
The rendering area determining unit is configured to determine a rendering area of the animation element on the pre-rendering canvas according to the pose;
and the pre-rendering animation obtaining unit is configured to render at least two animation elements into the rendering areas corresponding to the same pre-rendering canvas through a drawing interface to obtain the pre-rendered animation.
In one possible implementation manner, the rendering region determining unit comprises a rectangle determining subunit and a rendering region determining subunit;
The rectangle determining subunit is configured to determine a minimum bounding rectangle of the animation element according to the pose; and the rendering area determining subunit is configured to determine the minimum bounding rectangle as the rendering area of the animation element on the pre-rendering canvas.
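Computing a minimum bounding rectangle from an element's (possibly rotated) vertices can be sketched as below. The helper name is hypothetical; it illustrates one straightforward way to obtain the axis-aligned rendering area used on the pre-rendering canvas.

```javascript
// Axis-aligned minimum bounding rectangle of a set of [x, y] vertices.
function minBoundingRect(vertices) {
  const xs = vertices.map(([x]) => x);
  const ys = vertices.map(([, y]) => y);
  const minX = Math.min(...xs), minY = Math.min(...ys);
  return {
    x: minX,
    y: minY,
    width: Math.max(...xs) - minX,
    height: Math.max(...ys) - minY,
  };
}
```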
In one possible implementation manner, the pre-rendering animation obtaining unit comprises a vertex rendering subunit, a texture image rendering subunit and a pre-rendering animation generating subunit;
The vertex rendering subunit is configured to call a webGL interface and render the four vertices corresponding to the animation element onto the rendering area of the corresponding pre-rendering canvas through the webGL interface; the texture image rendering subunit is configured to, after the four vertices are rendered, render the texture image of the animation element in the area enclosed by the lines connecting the four vertices; and the pre-rendering animation generating subunit is configured to draw the texture images corresponding to at least two animation elements onto the same pre-rendering canvas to obtain the pre-rendered animation.
In one possible implementation, the animation display module 240 includes an animation rendering unit and an animation display unit;
The animation rendering unit is configured to call an image drawing interface of the target canvas and render the pre-rendered animation in the pre-rendering canvas onto the corresponding target canvas; and the animation display unit is configured to display the animation through the target canvas.
The animation display device of this embodiment may execute the animation display method according to the foregoing embodiment of the present application, and its implementation principle is similar, and will not be described herein.
An embodiment of the present application provides an electronic device, which comprises a memory, a processor, and at least one program stored in the memory and configured to be executed by the processor. Compared with the prior art, the electronic device can set additional transformation animations when SVGA animations are played, and render a plurality of SVGA animations that need to be played simultaneously onto the same target canvas for display, thereby helping to enrich the display forms of the animations, reduce rendering on the target canvas, and reduce CPU (Central Processing Unit) consumption when the animations are rendered.
In an alternative embodiment, an electronic device is provided. As shown in FIG. 6, the electronic device 4000 comprises a processor 4001 and a memory 4003, wherein the processor 4001 is coupled to the memory 4003, for example via a bus 4002. Optionally, the electronic device 4000 may further comprise a transceiver 4004, which may be used for data interaction between the electronic device and other electronic devices, such as transmission and/or reception of data. It should be noted that, in practical applications, the number of transceivers 4004 is not limited to one, and the structure of the electronic device 4000 does not constitute a limitation on the embodiments of the present application.
The processor 4001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various exemplary logical blocks, modules, and circuits described in connection with this disclosure. The processor 4001 may also be a combination that implements computing functionality, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The bus 4002 may include a path to transfer information between the above components. The bus 4002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 4002 can be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 6, but this does not mean that there is only one bus or only one type of bus.
The memory 4003 may be, but is not limited to, a ROM (Read-Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 4003 is used for storing application program code for executing the solutions of the present application, and execution of the code is controlled by the processor 4001. The processor 4001 is configured to execute the application program code stored in the memory 4003 to implement what is shown in the foregoing method embodiments.
The electronic devices include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., vehicle navigation terminals), as well as stationary terminals such as digital TVs and desktop computers. The electronic device shown in fig. 6 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
Embodiments of the present application provide a computer-readable storage medium having a computer program stored thereon which, when run on a computer, causes the computer to perform the corresponding method embodiments described above. Compared with the prior art, the embodiments of the present application enable additional transformation animations to be set when SVGA animations are played, and render a plurality of SVGA animations that need to be played simultaneously onto the same target canvas for display, thereby helping to enrich the display forms of the animations, reduce rendering on the target canvas, and reduce CPU (Central Processing Unit) consumption when the animations are rendered.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device, such as an electronic device, reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the following:
obtaining sequence frames of an SVGA animation;
configuring transformation parameters of the animation elements corresponding to each sequence frame, and calculating the pose corresponding to the animation elements according to the transformation parameters;
respectively rendering the animation elements into corresponding pre-rendering canvases through a drawing interface according to the poses to obtain pre-rendered animations;
and rendering the pre-rendered animation in each pre-rendering canvas onto a target canvas for display.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to electrical wiring, fiber optic cable, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be included in the electronic device or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above-described embodiments.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or hardware. The name of a module is not limited to the module itself in some cases, and for example, the sequence frame acquisition module may also be described as a "module that acquires a sequence frame".
It should be understood that, although the steps in the flowcharts of the figures are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited in order and may be performed in other orders, unless explicitly stated herein. Moreover, at least some of the steps in the flowcharts of the figures may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, the order of their execution not necessarily being sequential, but may be performed in turn or alternately with other steps or at least a portion of the other steps or stages.
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art can make modifications and adaptations without departing from the principles of the present application, and such modifications and adaptations are also intended to fall within the scope of the present application.