Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "a plurality" typically includes at least two.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
Fig. 3 is a flowchart of a light and shadow rendering method according to an embodiment of the present invention; the method may be applied to an electronic device. As shown in fig. 3, the method includes the following steps:
301. Obtain a first face model to be subjected to light and shadow effect rendering in a real-time animation scene in a game.
302. Adjust the vertex normals in the first face model.
303. Generate a normal map based on the adjusted first face model.
304. Render a light and shadow effect based on the normal map and a second face model to obtain the light and shadow effect of the second face model in the real-time animation scene in the game, where the second face model is obtained by adjusting the vertex normals of the face model in the cut-scene animation.
In practice, the animation in a game may include real-time animation. Real-time animation is animation used during game exploration or combat, whose playback the user can manipulate. The animation in a game may also include cut-scene animation. Cut-scene animation is animation related to the characters or plot of the game; it connects plot points during play and can elevate the game's narrative detail and storytelling. To make the animation more vivid and improve the user's experience, a light source is generally simulated to illuminate the virtual character, so that shadows are produced at appropriate positions on the character, and the expected light and shadow effect is obtained through brightness variation across different parts of the character.
It can be understood that the real-time animation includes a model of a virtual character, the model can be divided into a face part and a body part, and the embodiments of the present invention mainly provide a rendering scheme for the face part in real-time animation.
In the process of rendering real-time animation in a game, a first face model can be obtained, the vertex normals in the first face model are adjusted, a normal map is generated based on the adjusted first face model, and light and shadow effect rendering is performed based on the normal map and a second face model, so as to obtain the light and shadow effect of the second face model in the real-time animation scene in the game.
It should be noted that the face model is formed by a plurality of meshes, and each mesh includes a predetermined number of vertices; for example, a mesh may include 3 vertices (a triangular face) or 4 vertices (a quadrilateral face). Each vertex has a corresponding normal, and the normal has a certain direction or angle.
It is noted that the surface of the face model may be considered an uneven surface, with a normal at each vertex. If a light source is placed at a specific position, a face model with a low level of geometric detail can, according to these normals, produce lighting directions and reflection effects with a higher level of detail.
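To see why adjusting vertex normals alone reshapes the shading, consider the standard Lambert diffuse term, in which a vertex's brightness depends only on the angle between its normal and the light direction. The following Python sketch is illustrative only, not the renderer of the embodiments:

```python
import numpy as np

def lambert_intensity(normal, light_dir):
    """Diffuse brightness of a vertex: max(0, N . L) for unit vectors."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return max(0.0, float(np.dot(n, l)))

# Two vertices at the same position but with different normals receive
# different brightness, which is why editing vertex normals alone can
# reshape the shadow without moving any geometry.
light = np.array([0.0, 1.0, 0.0])
print(lambert_intensity(np.array([0.0, 1.0, 0.0]), light))  # 1.0, fully lit
print(lambert_intensity(np.array([1.0, 1.0, 0.0]), light))  # ~0.707, dimmer
```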
Optionally, the above process of adjusting the vertex normals in the first face model may be implemented as: acquiring a key animation frame; determining a plurality of face wirings in the first face model based on the key animation frame, where each face wiring in the first face model is formed by connecting a plurality of vertices; and adjusting the normal angle corresponding to each vertex on the different face wirings in the first face model.
Fig. 4 shows the unadjusted first face model, and fig. 5 shows the first face model in the real-time animation with the expected adjusted face wiring.
First, the key animation frames in the real-time animation can be drawn manually in drawing software, in the manner of original concept art. As shown in fig. 6-10, the images are the light and shadow effects of a virtual character's face under different lighting conditions, and can be used as key animation frames. Next, a plurality of face wirings in the first face model may be determined based on the key animation frames. In practical application, the face wirings in the first face model can be determined by manual labeling. Fig. 11-15 are schematic diagrams of the face wirings in the first face model, corresponding in turn to the key animation frames shown in fig. 6-10. The wiring result for the whole face in the first face model of the final real-time animation is shown in fig. 5; the adjusted face wiring of the first face model needs to remain approximately consistent with the unadjusted face wiring.
It should be noted that the expected light and shadow effect of the real-time animation mainly covers the following parts: the nose; the side of the cheek; the front of the cheek; the Lambertian light region; the forehead; and the philtrum, lips, and chin.
After the face wirings of the first face model in the real-time animation are obtained, the vertex normals on those face wirings can be adjusted. It should be noted that, to reduce the workload of adjusting vertex normals, the right half of the face model may be adjusted first and then copied to the left half, so that the whole effect of the first face model in the real-time animation can be obtained by adjusting only half of the face model.
Based on this, optionally, the first face model includes a right-half face model and a left-half face model, and the process of adjusting the vertex normals in the first face model may be implemented as: acquiring a key animation frame; determining a plurality of face wirings in the right-half face model based on the key animation frame, where each face wiring in the right-half face model is formed by connecting a plurality of vertices; adjusting the normal angle corresponding to each vertex on the different face wirings in the right-half face model; and determining the adjusted first face model based on the adjusted right-half face model.
The normal angles corresponding to the vertices on the different face wirings in the right-half face model may be adjusted as follows: identifying a third vertex set located on a preset type of face wiring in the first face model; determining a fourth vertex set located on the other face wirings, other than the preset type of face wiring, in the first face model; determining, based on a correspondence between vertices on preset face wirings and normal angles, a third angle corresponding to each vertex in the third vertex set on the preset type of face wiring; calculating, based on the third angles corresponding to the vertices in the third vertex set, fourth angles corresponding to the vertices in the fourth vertex set located on the other face wirings; and adjusting the normal angle of each vertex in the third vertex set to the corresponding third angle, and the normal angle of each vertex in the fourth vertex set to the corresponding fourth angle.
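As a minimal illustration of this two-set procedure, the vertices can be partitioned by the wiring they lie on, with vertices on preset wirings receiving fixed angles from a lookup table. The wiring labels and function names below are hypothetical, and the table contents merely echo the example angles given later in this description:

```python
# Hypothetical sketch: split vertices into a "preset wiring" set with fixed
# normal angles and a remaining set whose angles must be derived from them.

PRESET_ANGLES = {"a": 85.0, "b": 30.0, "c": 0.0, "f_inner": 3.0}  # degrees

def partition_vertices(wiring_of_vertex):
    """wiring_of_vertex: dict mapping vertex id -> wiring label."""
    third_set = {}   # vertices on preset wirings, with their fixed angle
    fourth_set = []  # vertices on all other wirings, to be interpolated
    for vid, wiring in wiring_of_vertex.items():
        if wiring in PRESET_ANGLES:
            third_set[vid] = PRESET_ANGLES[wiring]
        else:
            fourth_set.append(vid)
    return third_set, fourth_set

third, fourth = partition_vertices({0: "a", 1: "b", 2: "mid_1", 3: "c"})
print(third)   # {0: 85.0, 1: 30.0, 3: 0.0}, fixed by the lookup table
print(fourth)  # [2], angle computed later from neighbouring preset lines
```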
Optionally, the process of calculating, based on the third angles corresponding to the vertices in the third vertex set, the fourth angles corresponding to the vertices in the fourth vertex set on the other face wirings may be implemented as: calculating the fourth angles corresponding to the vertices in the fourth vertex set located on the other face wirings based on the positional relationship between the other face wirings and the preset type of face wiring, together with the third angles corresponding to the vertices in the third vertex set.
In an embodiment of the present invention, as shown in fig. 16, the normal angle is an angle in the plane formed by the x-axis and the y-axis, where by default the y-axis is 0° and the x-axis is 90°.
The preset type of face wiring can be set as required. In some embodiments, the preset type of face wiring may be the side-face a line, the cheek-and-forehead b line, the center c line, and the inner circle of the Lambertian-light f line of the first face model in the real-time animation. The normal angle corresponding to each vertex in the third vertex set on these face wirings is a fixed angle, and can be obtained by looking up the correspondence between vertices on preset face wirings and normal angles. For example, the normal angle corresponding to the vertices on the side-face a line of the first face model in the real-time animation is 85°, the normal angle corresponding to the vertices on the cheek-and-forehead b line is 30°, the normal angle corresponding to the vertices on the center c line is 0°, and the normal angle corresponding to the vertices on the inner circle of the Lambertian-light f line is 3°. The nose portion may be rotated 90° downward with the z-axis fixed.
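Under the angle convention of fig. 16 (0° along the y-axis, 90° along the x-axis), a normal angle can be converted into a unit normal vector in the xy-plane. The conversion below is an assumption made for illustration; the disclosure itself only fixes the two reference axes:

```python
import math

def angle_to_normal(theta_deg):
    """Unit normal in the xy-plane; 0 deg -> +y axis, 90 deg -> +x axis."""
    theta = math.radians(theta_deg)
    return (math.sin(theta), math.cos(theta), 0.0)

print(angle_to_normal(0.0))   # (0.0, 1.0, 0.0): points along +y
print(angle_to_normal(90.0))  # (1.0, ~0.0, 0.0): points along +x
print(angle_to_normal(85.0))  # side-face a line normal, nearly along +x
```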
In some alternative embodiments, the normal angles corresponding to the vertices on the nose-part d line and the front-side-face turning-part d line of the first face model in the real-time animation need to be consistent. Likewise, the normal angles corresponding to the vertices on the nose-part e line and the front-side-face turning-part e line of the first face model in the real-time animation need to be consistent.
In some alternative embodiments, as shown in fig. 17, the normal angle corresponding to the vertices on the side-face a line of the first face model in the real-time animation is 85°, and the normal angles corresponding to the vertices on the adjacent face wirings decrease by 5° per wiring, transitioning gradually to the front-side-face turning-part e line: the normal angle on the front-side-face turning-part e line is 65°, and the normal angle on the front-side-face turning-part d line, adjacent to the right of the e line, is 70°. Based on this, the vertices on the nose-part d line correspond to a normal angle of 70°, and the vertices on the nose-part e line correspond to a normal angle of 65°. The shadows of the sides of the nose and of the bottom of the nose need to be connected without a break.
In addition, the normal angle corresponding to the vertices on the cheek-and-forehead b line of the first face model in the real-time animation is 30°. The normal angle corresponding to each face wiring of the front forehead portion of the first face model in the real-time animation can be calculated from the normal angles of the cheek-and-forehead b line and the center c line and the number of wiring segments between the two lines. The number of wiring segments between the two lines is 7, the transition runs from the cheek-and-forehead b line to the center c line, and the normal angle changes by 30° ÷ 7 = 4.286° per step. During the rotation, a fixed rotation angle of 4.286° may be set.
For the forehead turning part, the normal angle of the e line is 65° and that of the b line is 30°; the number of wiring segments between the two lines is 2, so the normal angle changes by 17.5° per step, that is, the face wiring between the two lines corresponds to a normal angle of 65° − 17.5° = 47.5°. For the cheek turning part, the normal angle of the e line is 65° and that of the b line is 30°; the number of wiring segments between the two lines is 4, so the normal angle changes by 8.75° per step.
For the Lambertian light of the first face model in the real-time animation, the normal angle at the point where the wirings converge in the middle is 3°, and the transition to the b line spans 5 wiring segments in total, so the normal angle changes by 5.4° per step. Accordingly, the normal angle of the outermost ring of the Lambertian light is 30° − 5.4° = 24.6°.
In the real-time animation, the normal angle of the nose-part e line of the first face model is 65° and that of the outermost ring of the Lambertian light is 24.6°; the number of wiring segments between the two lines is 4, so the normal angle changes by 10.1° per step.
In general: locate two face wirings whose normal angles are known, determine the number of wiring segments between them, divide the difference between their normal angles by the number of segments to obtain the progressive gradient, and then compute the normal angle of each intermediate face wiring from the number of segments between it and one of the two known wirings.
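A sketch of this progression in Python, using the worked numbers from the preceding paragraphs; the function and parameter names are illustrative only:

```python
def progressive_angles(angle_start, angle_end, num_segments):
    """Linearly interpolated normal angles for the face wirings strictly
    between two preset wirings separated by num_segments segments."""
    step = (angle_start - angle_end) / num_segments  # progressive gradient
    return [angle_start - step * i for i in range(1, num_segments)]

# Forehead turning part: e line (65 deg) to b line (30 deg) across
# 2 segments gives a step of 17.5 deg and one intermediate wiring.
print(progressive_angles(65.0, 30.0, 2))   # [47.5]

# Cheek-and-forehead to centre: b line (30 deg) to c line (0 deg) across
# 7 segments gives a step of ~4.286 deg per wiring.
print([round(a, 3) for a in progressive_angles(30.0, 0.0, 7)])
# [25.714, 21.429, 17.143, 12.857, 8.571, 4.286]
```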
For the region under the nose of the first face model in the real-time animation, the original normal angle is 0°, and the normals can now be rotated 90° to point below the face. In this way, the shadow of the region under the nose is expected to have a flat upper edge rather than a diamond shape.
In addition, the normal angle of the white region of the first face model in the real-time animation is fixed at 30°. The normal angles corresponding to the vertices along the mouth-corner axis need to be set consistently.
In the above manner, the normal angles corresponding to the vertices on all face wirings in the right-half face model can be calculated, and once they are determined, the adjusted first face model in the real-time animation can be determined based on the adjusted right-half face model.
Optionally, the process of determining the adjusted first face model based on the adjusted right-half face model may be implemented as follows: copying the adjusted right-half face model; flipping the normal direction corresponding to each vertex in the copied face model; and determining the adjusted first face model based on the adjusted right-half face model and the flipped face model.
In practical application, the unadjusted left-half face model in the first face model in the real-time animation can first be deleted, and the adjusted right-half face model is then mirror-copied into the place of the deleted left half. Because the normal direction corresponding to each vertex in the mirror-copied face model points the wrong way, the normal direction corresponding to each vertex in the copied face model can be inverted.
Optionally, after flipping, the flipped face model may replace the left-half face model in the first face model; the adjusted right-half face model and the face model replacing the left half are combined; and vertices in the combined face model that meet a preset distance condition are welded to obtain the adjusted first face model.
It should be noted that after welding the normals are split, that is, some vertices in the first face model have two normals, one being the original normal and the other the adjusted normal, and the two need to be merged into one.
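The mirror-copy, flip, and weld sequence can be sketched as follows, under the assumption (made only for this illustration) that the symmetry plane is x = 0, and ignoring face/index bookkeeping:

```python
import numpy as np

def mirror_half_face(verts_r, normals_r, weld_tol=1e-4):
    """Build a full face from an adjusted right half, assuming symmetry
    about the plane x = 0 (an assumption of this sketch).

    verts_r, normals_r: (N, 3) arrays for the adjusted right-half model.
    """
    n = len(verts_r)
    verts_l = verts_r * np.array([-1.0, 1.0, 1.0])      # mirrored copy
    normals_l = normals_r * np.array([-1.0, 1.0, 1.0])  # flip x of normals

    verts = np.vstack([verts_r, verts_l])
    normals = np.vstack([normals_r, normals_l])

    # Weld each seam vertex (x ~ 0) with its mirrored duplicate at index
    # i + n: merge the two normals into a single normal, as described above.
    # (Seam normals have x ~ 0, so the merged sum is never a zero vector.)
    keep = np.ones(2 * n, dtype=bool)
    for i in range(n):
        if abs(verts_r[i, 0]) < weld_tol:
            merged = normals[i] + normals[i + n]
            normals[i] = merged / np.linalg.norm(merged)
            keep[i + n] = False            # drop the duplicate vertex
    return verts[keep], normals[keep]
```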
Next, a normal map may be generated based on the adjusted first face model in the real-time animation. Optionally, the process of generating the normal map based on the adjusted first face model may be implemented as: acquiring the normal information of the adjusted first face model; and baking the normal information into the second face model to generate the normal map. An example of the resulting normal map is shown in fig. 18.
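Baking can be sketched as follows: for each texel of the second model's UV layout, look up the adjusted normal of the first model at the corresponding surface point and encode it into RGB with the usual [−1, 1] to [0, 255] mapping. The correspondence lookup itself is the baking tool's job and is abstracted into a callback here; all names are hypothetical:

```python
import numpy as np

def encode_normal_to_rgb(normal):
    """Standard normal-map encoding: each component of a unit normal in
    [-1, 1] is remapped to an 8-bit channel in [0, 255]."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return np.round((n * 0.5 + 0.5) * 255).astype(np.uint8)

def bake_normal_map(sample_source_normal, width, height):
    """sample_source_normal(u, v) -> adjusted normal of the first face
    model at the point mapping to texel (u, v) of the second model's UVs.
    That correspondence function is assumed to exist; constructing it is
    the baking tool's responsibility and is not specified by this sketch."""
    img = np.zeros((height, width, 3), dtype=np.uint8)
    for y in range(height):
        for x in range(width):
            n = sample_source_normal(x / (width - 1), y / (height - 1))
            img[y, x] = encode_normal_to_rgb(n)
    return img

# Example: a map whose every texel encodes the "straight out" normal.
flat = bake_normal_map(lambda u, v: (0.0, 0.0, 1.0), 4, 4)
print(flat[0, 0])  # [128 128 255], the familiar normal-map blue
```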
After the normal map is obtained, light and shadow effect rendering can be performed based on the normal map and the second face model, so as to obtain the light and shadow effect of the second face model in the real-time animation scene in the game. This process requires a second face model, which is in fact a face model obtained by adjusting the vertex normals of an initial second face model. The second face model may be used for light and shadow effect rendering of the cut-scene animation.
When adjusting the initial second face model, as shown in fig. 19, the normal angle is an angle in the plane formed by the x-axis and the y-axis, where by default the y-axis is 0° and the x-axis is 90°.
Optionally, the above process of adjusting the vertex normals in the initial second face model may be implemented as: identifying a first vertex set located on a preset type of face wiring in the initial second face model; determining a second vertex set located on the other face wirings, other than the preset type of face wiring, in the initial second face model; determining, based on a correspondence between vertices on the preset type of face wiring and normal angles, a first angle corresponding to each vertex in the first vertex set on the preset type of face wiring; calculating, based on the first angles corresponding to the vertices in the first vertex set, second angles corresponding to the vertices in the second vertex set on the other face wirings; and adjusting the normal angle of each vertex in the first vertex set to the corresponding first angle, and the normal angle of each vertex in the second vertex set to the corresponding second angle.
The preset type of face wiring can be set as required. In some embodiments, as shown in fig. 20, the preset type of face wiring may be the side-face a line, the cheek-and-forehead b line, the center c line, and the inner circle of the Lambertian-light f line. The normal angle corresponding to each vertex in the first vertex set on these face wirings is a fixed angle, and can be obtained by looking up the correspondence between vertices on preset face wirings and normal angles. For example, the normal angle corresponding to the vertices on the side-face a line is 85°, the normal angle corresponding to the vertices on the cheek-and-forehead b line is 30°, the normal angle corresponding to the vertices on the center c line is 0°, and the normal angle corresponding to the vertices on the inner circle of the Lambertian-light f line is 3°. As shown in fig. 21, the normal angle corresponding to the vertices at the eye and white positions may be adjusted to 0°.
After the first angle corresponding to each vertex in the first vertex set is determined, the second angle corresponding to each vertex in the second vertex set may be calculated based on those first angles. The second face model includes a plurality of face wirings, and the second vertex set consists of the vertices on the face wirings other than the preset type of face wiring. After the first angles and second angles are determined, the normal angle of each vertex in the first vertex set may be adjusted to the corresponding first angle, and the normal angle of each vertex in the second vertex set to the corresponding second angle.
For example, the normal angle corresponding to the vertices on the side-face a line is 85°, and the normal angles of the side-face wirings decrease by 5° per wiring from the a line toward the canthus: the first face wiring from the a line toward the canthus corresponds to 80°, and so on, the third face wiring corresponds to 70°, and the fourth face wiring, next to the canthus, corresponds to 65°.
The third face wiring may also be called the front-side-face turning-part d line, and in some alternative embodiments the normal angles corresponding to the vertices on the nose-part d line and the front-side-face turning-part d line need to be consistent. The fourth face wiring, next to the canthus, may also be called the front-side-face turning-part e line, and in some alternative embodiments the normal angles corresponding to the vertices on the nose-part e line and the front-side-face turning-part e line need to be consistent. Based on this, after the normal angles corresponding to the vertices on the front-side-face turning-part d line and e line are determined, the determined normal angles may be copied to the triangular region at the side of the nose, i.e., the normal angle of the nose-part d line is adjusted to 70° and that of the nose-part e line to 65°.
Optionally, the process of calculating, based on the first angles corresponding to the vertices in the first vertex set, the second angles corresponding to the vertices in the second vertex set on the other face wirings may be implemented as: calculating the second angles corresponding to the vertices in the second vertex set located on the other face wirings based on the positional relationship between the other face wirings and the preset type of face wiring, together with the first angles corresponding to the vertices in the first vertex set.
The positional relationship between the other face wirings and the preset type of face wiring may be the number of wiring segments between them. Based on that number of wiring segments and the first angles corresponding to the vertices in the first vertex set, the second angles corresponding to the vertices in the second vertex set on the other face wirings can be calculated.
In practical application, the number of wiring segments from one preset-type face wiring to another can be determined, and several other face wirings may lie between the two. Since the normal angles of the two preset-type face wirings are known, the difference between them can be calculated, and the progressive gradient follows from dividing that difference by the number of wiring segments. Once the progressive gradient is obtained, the normal angle of any of the other face wirings can be determined from the gradient and the number of wiring segments between that wiring and the one of the two preset-type wirings with the larger normal angle. The formula for the progressive gradient is:
progressive gradient = normal angle difference ÷ number of wiring segments
For example, as shown in fig. 22, the vertices on the center c line correspond to a normal angle of 0° and the vertices on the cheek-and-forehead b line to 30°. There are 7 wiring segments from the c line to the b line, so the progressive gradient is (30° − 0°) ÷ 7 = 4.286°. Starting from the b line, the first adjacent face wiring corresponds to 30° − 4.286° = 25.714°, the next to 25.714° − 4.286° = 21.428°, and so on, so the normal angles of the other face wirings sandwiched between the c line and the b line can all be calculated.
For another example, if the normal angle of the e line is 65°, the normal angle of the b line is 30°, and there are 2 wiring segments from the e line to the b line, then the face wiring between them corresponds to a normal angle of 65° − (65° − 30°) ÷ 2 = 47.5°.
In addition, for the head in the second face model, a sphere may be created whose per-vertex normal angles are preset. As shown in fig. 23-24, the corresponding normals of the sphere are transferred to the head. After the transfer, the user can manually adjust the position about the z-axis so that the normals of the head and the forehead join seamlessly, as shown in fig. 25.
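A minimal sketch of the transfer, assuming each head vertex takes the normal the sphere would have at the nearest sphere point; for a sphere this is simply the unit direction from the sphere's centre to the vertex. The centre and vertex values below are hypothetical:

```python
import numpy as np

def transfer_sphere_normals(head_verts, sphere_center):
    """Assign each head vertex the normal of the sphere at the nearest
    sphere point, i.e. the unit direction from the sphere's centre."""
    d = head_verts - sphere_center
    return d / np.linalg.norm(d, axis=1, keepdims=True)

head = np.array([[0.0, 1.8, 0.0], [0.1, 1.9, 0.05]])
normals = transfer_sphere_normals(head, sphere_center=np.array([0.0, 1.7, 0.0]))
print(normals.round(3))
# After the transfer, the artist can still rotate the normals about the
# z-axis by hand so the head and forehead normals join seamlessly (fig. 25).
```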
As shown in fig. 26-27, to produce a better light and shadow effect, the face wirings of the brow ridge and upper eyelid in the second face model need to be laid out according to a strict rule, and the normal angles corresponding to the face wirings of the brow ridge and upper eyelid need to be kept consistent. It should be noted that the shape of the shadow is directly tied to the face wiring: the shape of the face wiring directly determines the normal angles and the light and shadow effect.
The method can meet the demand for clean, artistically controllable shadow edges in extreme close-ups, ensures natural transitions in the game's real-time animated combat, and achieves the light and shadow effect of cel animation.
A face model rendered in a real-time animation scene by this method is suitable for in-game combat and world exploration running at full frame rate, and produces no sense of incongruity even at 60+ frames per second. The invention lets the light and shadow linger on parts of the face model that need no in-between frames, while transitioning past, rather than dwelling on, unattractive shadow regions. To this end, a first face model is generated, and its vertex normals are adjusted to obtain a first face model for the real-time animation scene that transitions naturally and matches the cel-animation effect.
Since the light and shadow transitions follow the face wiring, the adjusted face wiring of the first face model is not suitable for animation production, especially animation with exaggerated expressions. Therefore, the adjusted normal information of the first face model needs to be baked onto the second face model to form the normal map.
The method provided by the embodiments of the present invention is also suitable for multiple levels of detail (Levels of Detail, LOD) across more specifications. Thus, in the combat and world exploration of real-time animation in games, the shadows of the face model transition naturally and match the intended animation light and shadow effect. Even in extreme close-ups, the addition of soft-edge processing can alleviate jagged shadow edges on small parts.
With the method and device, users get an experience closer to two-dimensional animation and a better sense of immersion, and can more readily feel the charm of the original artwork and of two-dimensional animation. Improving the picture quality of real-time animation in this way can extend the life of a game and give it greater vitality.
With the method and device, in a real-time animation scene in a game, the vertex normals in the first face model can be adjusted, and the normal information corresponding to the adjusted first face model is then baked into the second face model to obtain the normal map. Finally, rendering is performed based on the normal map and the second face model to obtain the expected light and shadow effect. With the method of the present invention, there is no need to avoid shadow problems by weakening the shadows of the face model or locking the light source. Furthermore, the prior-art problem of the face model being almost unaffected by physical illumination can be avoided, so that the mismatch between the face model and the body model caused by differing illumination is reduced and the texture of the animated picture is improved.
The light and shadow rendering apparatus of one or more embodiments of the present invention will be described in detail below. Those skilled in the art will appreciate that each of these apparatuses can be configured from commercially available hardware components through the steps taught in this scheme.
Fig. 28 is a schematic structural diagram of a light and shadow rendering apparatus according to an embodiment of the present invention, and as shown in fig. 28, the apparatus includes:
an obtaining module 281, configured to obtain a first face model to be subjected to light and shadow effect rendering in a real-time animation scene in a game;
an adjusting module 282, configured to adjust the vertex normals in the first face model;
a generating module 283, configured to generate a normal map based on the adjusted first face model;
and a rendering module 284, configured to perform light and shadow effect rendering based on the normal map and a second face model to obtain a light and shadow effect of the second face model in the real-time animation scene in the game, where the second face model is obtained by adjusting the vertex normals of the face model in the cut-scene animation.
Optionally, the adjusting module 282 is configured to:
acquiring a key animation frame;
determining a plurality of face wirings in the first face model based on the key animation frame, where each face wiring in the first face model is formed by connecting a plurality of vertices;
and adjusting the normal angle corresponding to each vertex on the different face wirings in the first face model.
Optionally, the first face model includes a left-half face model and a right-half face model, and the adjusting module 282 is configured to:
acquiring a key animation frame;
determining a plurality of face wirings in the right-half face model based on the key animation frame, where each face wiring in the right-half face model is formed by connecting a plurality of vertices;
adjusting the normal angle corresponding to each vertex on the different face wirings in the right-half face model;
and determining the adjusted first face model based on the adjusted right-half face model.
Optionally, the adjusting module 282 is configured to:
copying the adjusted right-half face model;
flipping the normal direction corresponding to each vertex in the copied face model;
and determining the adjusted first face model based on the adjusted right-half face model and the flipped face model.
Optionally, the adjusting module 282 is configured to:
replacing the left-half face model in the first face model with the flipped face model;
combining the adjusted right-half face model and the face model replacing the left half;
and welding vertices in the combined face model that meet a preset distance condition to obtain the adjusted first face model.
Optionally, the generating module 283 is configured to:
acquiring normal information of the adjusted first face model;
baking the normal information into the second face model to generate a normal map.
Optionally, the adjusting module 282 is further configured to:
acquiring an initial second face model;
and adjusting the vertex normals in the initial second face model to obtain the second face model.
Optionally, the adjusting module 282 is configured to:
identifying a first vertex set located on a preset type of face wiring in the initial second face model;
determining a second vertex set located on the other face wirings, other than the preset type of face wiring, in the initial second face model;
determining, based on the correspondence between vertices on the preset type of face wiring and normal angles, a first angle corresponding to each vertex in the first vertex set on the preset type of face wiring;
calculating, based on the first angles corresponding to the vertices in the first vertex set, second angles corresponding to the vertices in the second vertex set on the other face wirings;
and adjusting the normal angle of each vertex in the first vertex set to the corresponding first angle, and the normal angle of each vertex in the second vertex set to the corresponding second angle.
Optionally, the adjusting module 282 is configured to:
and calculating the second angles corresponding to the vertices in the second vertex set located on the other face wirings based on the positional relationship between the other face wirings and the preset type of face wiring, together with the first angles corresponding to the vertices in the first vertex set.
The apparatus shown in fig. 28 may perform the light and shadow rendering method provided in the embodiments shown in fig. 1 to fig. 27, and the detailed implementation process and technical effects are described in the embodiments, and are not repeated herein.
In one possible design, the structure of the light and shadow rendering apparatus shown in fig. 28 may be implemented as an electronic device; as shown in fig. 29, the electronic device may include: a processor 91 and a memory 92. The memory 92 stores executable code which, when executed by the processor 91, enables the processor 91 to at least implement the light and shadow rendering method provided in the foregoing embodiments shown in fig. 1 to 27.
Optionally, the electronic device may further include a communication interface 93 for communicating with other devices.
In addition, an embodiment of the present invention provides a non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to implement at least the light and shadow rendering method provided in the foregoing embodiments shown in fig. 1 to 27.
The above-described apparatus embodiments are merely illustrative, wherein the units described as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by adding a necessary general hardware platform, or by a combination of hardware and software. Based on this understanding, the above technical solutions, in essence or in the part contributing to the prior art, may be embodied in the form of a computer program product, which may be embodied on one or more computer-usable storage media having computer-usable program code embodied therein, including without limitation disk storage, CD-ROM, and optical storage.
The light and shadow rendering method provided in the embodiment of the present invention may be executed by a certain program/software, the program/software may be provided by a network side, the electronic device mentioned in the foregoing embodiment may download the program/software into a local nonvolatile storage medium, and when it needs to execute the light and shadow rendering method, the program/software is read into a memory by a CPU, and then the CPU executes the program/software to implement the light and shadow rendering method provided in the foregoing embodiment, and an execution process may refer to the schematic in fig. 1 to fig. 27.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.