
CN114119851A - Light and shadow rendering method, apparatus, device and storage medium - Google Patents


Info

Publication number
CN114119851A
CN114119851A
Authority
CN
China
Prior art keywords
face model
face
vertex
normal
adjusted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111486602.9A
Other languages
Chinese (zh)
Inventor
张睦翊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Perfect Time And Space Software Co ltd
Original Assignee
Shanghai Perfect Time And Space Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Perfect Time And Space Software Co ltd filed Critical Shanghai Perfect Time And Space Software Co ltd
Priority to CN202111486602.9A priority Critical patent/CN114119851A/en
Publication of CN114119851A publication Critical patent/CN114119851A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 — 3D [Three Dimensional] image rendering
    • G06T 15/50 — Lighting effects
    • G06T 15/506 — Illumination models

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the present invention provides a light and shadow rendering method, apparatus, device, and storage medium. The method includes: acquiring a first face model to be rendered with light and shadow effects in a real-time animation scene in a game; adjusting the vertex normals in the first face model; generating a normal map based on the adjusted first face model; and rendering light and shadow effects based on the normal map and a second face model to obtain the light and shadow effect of the second face model in the real-time animation scene in the game, where the second face model is obtained by adjusting the vertex normals of the face model used in cutscenes. In the real-time animation scene, the vertex normals in the first face model are adjusted, and the normal information corresponding to the adjusted first face model is then baked into the second face model to obtain the normal map. Finally, rendering is performed based on the normal map and the second face model to obtain the expected light and shadow effect, improving the texture of the animation picture.



Description

Light and shadow rendering method, apparatus, device and storage medium
Technical Field
The present invention relates to the field of animation rendering technologies, and in particular, to a method, an apparatus, a device, and a storage medium for shadow rendering.
Background
Cartoon rendering makes the light and shadow transitions of a virtual character pass directly through discrete color bands so as to achieve the light and shadow effect of celluloid (cel) animation. The representation of celluloid animation is highly subjective, and in order to present an attractive light and shadow effect, the illumination of the virtual character in celluloid animation does not completely conform to real physical illumination.
Under conditions that completely conform to real physical illumination, that is, when the face model corresponding to the virtual character receives no special treatment, the light and shadow near the mouth region of the face model fail to match the light and shadow effect of celluloid animation and become unattractive due to the color banding of the light and shadow transition, thereby degrading the texture of the animation picture. As shown in figs. 1-2, the left diagram in figs. 1-2 is the shadow effect obtained by rendering without special processing of the face model corresponding to the virtual character, and the right diagram in figs. 1-2 is the shadow effect of an ideal celluloid animation.
The related art proposes solutions to the above problem, mainly solving the light and shadow problem of the face model by reducing the shadows on the face model or by locking the illumination light source. However, with these schemes the face model is hardly affected by physical illumination any more, which produces a strong sense of incongruity with the light and shadow effect of the body model, so the texture of the animation picture is still not improved.
Disclosure of Invention
The embodiments of the present invention provide a method, an apparatus, a device, and a storage medium for light and shadow rendering, which are used to improve the texture of animation pictures.
In a first aspect, an embodiment of the present invention provides a method for rendering shadows, where the method includes:
acquiring a first face model to be subjected to shadow effect rendering in a real-time animation scene in a game;
adjusting a vertex normal in the first face model;
generating a normal map based on the adjusted first face model;
and rendering a light and shadow effect based on the normal map and a second face model to obtain the light and shadow effect of the second face model in the real-time animation scene in the game, wherein the second face model is obtained by adjusting the vertex normals of the face model in the cutscene.
In a second aspect, an embodiment of the present invention provides a shadow rendering apparatus, including:
an acquisition module, configured to acquire a first face model to be rendered with a light and shadow effect in a real-time animation scene in a game;
the adjusting module is used for adjusting the vertex normal in the first face model;
the generating module is used for generating a normal map based on the adjusted first face model;
and the rendering module is used for rendering a light and shadow effect based on the normal map and a second face model so as to obtain the light and shadow effect of the second face model in the real-time animation scene in the game, wherein the second face model is obtained by adjusting the vertex normals of the face model in the cutscene.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor and a memory, where the memory stores executable code thereon, and when the executable code is executed by the processor, the processor is enabled to implement at least the shadow rendering method in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to implement at least the shadow rendering method of the first aspect.
With the method and apparatus, in a real-time animation scene in a game, the vertex normals in the first face model can be adjusted, and the normal information corresponding to the adjusted first face model is then baked into the second face model to obtain a normal map. Finally, rendering is performed based on the normal map and the second face model to obtain a light and shadow effect that meets expectations. With the method provided by the invention, there is no need to reduce the shadows of the face model or to lock the light source. Furthermore, the prior-art problem that the face model is hardly affected by physical illumination is avoided, so the sense of mismatch between the face model and the body model caused by different illumination is reduced, and the texture of the animation picture is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on the drawings without creative efforts.
FIGS. 1-2 are schematic diagrams illustrating comparison between results and expectations of shadow rendering according to embodiments of the present invention;
fig. 3 is a schematic flowchart of a shadow rendering method according to an embodiment of the present invention;
fig. 4-5 are schematic diagrams of a first face model according to an embodiment of the present invention;
FIGS. 6-10 are schematic diagrams of key animation frames according to an embodiment of the present invention;
fig. 11-15 are schematic diagrams of face wirings corresponding to key animation frames according to an embodiment of the present invention;
FIGS. 16-17 are schematic diagrams illustrating adjustment of a first face model according to an embodiment of the present invention;
FIG. 18 is a diagram illustrating a normal map according to an embodiment of the present invention;
FIGS. 19-22 are schematic diagrams illustrating adjustment of a second face model according to an embodiment of the present invention;
fig. 23-25 are schematic views of an adjustment of a scalp portion according to an embodiment of the present invention;
FIGS. 26-27 are schematic diagrams illustrating an adjustment of a second face model according to an embodiment of the present invention;
fig. 28 is a schematic structural diagram of a light and shadow rendering apparatus according to an embodiment of the present invention;
fig. 29 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "a plurality" typically includes at least two.
The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (a stated condition or event) is detected" or "in response to detecting (a stated condition or event)", depending on the context.
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
Fig. 3 is a flowchart of a shadow rendering method according to an embodiment of the present invention, where the method may be applied to an electronic device. As shown in fig. 3, the method comprises the steps of:
301. Acquire a first face model to be rendered with a light and shadow effect in a real-time animation scene in a game.
302. Adjust the vertex normals in the first face model.
303. Generate a normal map based on the adjusted first face model.
304. Render a light and shadow effect based on the normal map and a second face model to obtain the light and shadow effect of the second face model in the real-time animation scene in the game, where the second face model is obtained by adjusting the vertex normals of the face model in the cutscene.
In practice, the animations in a game may include real-time animation. Real-time animation is animation used during game exploration or combat, whose playback the user can manipulate. The animations in a game may also include cutscenes. A cutscene is an animation related to the characters or plot of the game; it connects parts of the plot during play and can elevate the detailed depiction of the game and its story. To make the animation more vivid and improve the user's experience, a light is generally simulated to illuminate the virtual character so that shadows are produced at appropriate positions on the character, and the expected light and shadow effect is obtained through the brightness changes at different positions on the character.
It can be understood that the real-time animation includes a model of a virtual character, which can be divided into a face part and a body part; the embodiments of the present invention mainly provide a rendering scheme for the face part in real-time animation.
In the process of rendering real-time animation in a game, a first face model can be obtained, a vertex normal in the first face model is adjusted, a normal map is generated based on the adjusted first face model, and light and shadow effect rendering is performed based on the normal map and a second face model, so that the light and shadow effect of the second face model in the real-time animation scene in the game is obtained.
It should be noted that the face model is formed by a plurality of meshes, and each mesh includes a predetermined number of vertices; for example, a mesh may include 3 vertices (a triangular face) or 4 vertices (a quadrilateral face). Each vertex has a corresponding normal, and the normal has a certain direction or angle.
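As a minimal sketch (not part of the patent itself), the mesh representation implied here — vertices carrying per-vertex normals, grouped into triangular or quadrilateral faces — could look like the following; all class and field names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Vertex:
    position: tuple  # (x, y, z) coordinates of the vertex
    normal: tuple    # unit vector; its direction (angle) drives the shading

@dataclass
class Face:
    vertex_indices: list  # 3 indices for a triangular face, 4 for a quadrilateral face

@dataclass
class FaceModel:
    vertices: list = field(default_factory=list)
    faces: list = field(default_factory=list)
```

Adjusting vertex normals, as in step 302, then amounts to rewriting the `normal` field of selected vertices without moving their positions.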
It is also noted that the surface of the face model can be regarded as an uneven surface, with a normal at each vertex of that surface. If a light source is placed at a specific position, a face model surface with low geometric detail can still produce an accurate illumination direction and a highly detailed reflection effect according to these normals.
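To illustrate why the normals, rather than the geometry, control the lighting here, the standard Lambert diffuse term can be sketched as below. This is a hedged illustration, not the patent's shading model; the function name and sample values are hypothetical:

```python
def lambert_intensity(normal, light_dir):
    """Diffuse intensity as the clamped dot product of the unit
    surface normal and the unit direction toward the light."""
    nx, ny, nz = normal
    lx, ly, lz = light_dir
    return max(0.0, nx * lx + ny * ly + nz * lz)

# A normal sampled from a normal map overrides the flat geometric normal,
# so the same flat triangle can shade as if it were curved.
flat_normal = (0.0, 0.0, 1.0)
mapped_normal = (0.6, 0.0, 0.8)   # hypothetical texel, already unit length
light = (0.0, 0.0, 1.0)           # light shining straight at the surface

print(lambert_intensity(flat_normal, light))    # 1.0
print(lambert_intensity(mapped_normal, light))  # 0.8
```

Because the intensity depends only on the normal and the light direction, editing the vertex normals (as the method does) reshapes the shadow boundaries without touching the mesh.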
Optionally, the above process of adjusting the vertex normal in the first face model may be implemented as: acquiring a key animation frame; determining a plurality of face wirings in the first face model based on the key animation frame, wherein each face wiring in the first face model is formed by connecting a plurality of vertexes; and adjusting the normal degree corresponding to each vertex on different face wirings in the first face model.
Fig. 4 shows the unadjusted first face model, and fig. 5 shows the first face model in the real-time animation with the expected adjusted face wiring.
First, the key animation frames in the real-time animation can be drawn manually in drawing software, in the manner of concept-art settings. The images shown in figs. 6-10 are the light and shadow effects of one virtual character's face under different lighting conditions and can be used as key animation frames. Next, a plurality of face wiring lines in the first face model can be determined based on the key animation frames. In practical applications, the face wiring in the first face model can be determined by manual labeling. Figs. 11-15 are schematic diagrams of the face wiring in the first face model, corresponding in turn to the key animation frames shown in figs. 6-10. The wiring result for the whole face in the final first face model of the real-time animation is shown in fig. 5; the adjusted face wiring needs to remain roughly consistent with the unadjusted face wiring of the first face model.
It should be noted that the expected light and shadow effect of the real-time animation mainly involves the following parts: the nose; the side of the cheek; the front of the cheek; the Rembrandt light; the forehead; and the philtrum, lips, and chin.
After the face wiring of the first face model in the real-time animation is obtained, the vertex normals on the face wiring can be adjusted. It should be noted that, to reduce the workload of adjusting vertex normals, the right half of the face model can be adjusted first and then mirrored to the left half, so that the full effect of the first face model in the real-time animation is obtained by adjusting only half of the model.
Based on this, optionally, the first face model includes a right half face model and a left half face model, and the process of adjusting the vertex normal in the first face model may be implemented as: acquiring a key animation frame; determining a plurality of face wirings in the right-half face model based on the key animation frame, wherein each face wiring in the right-half face model is formed by connecting a plurality of vertexes together; adjusting the normal degree corresponding to each vertex on different face wirings in the right half face model; and determining the adjusted first face model based on the adjusted right half face model.
The method for adjusting the normal degree corresponding to each vertex on different face wirings in the right-side face model may be: identifying a third vertex set positioned on the preset type of face wiring in the first face model; determining a fourth vertex set on other face wirings except for the face wirings in the preset type in the first face model; determining a third degree corresponding to each vertex in a third vertex set on the preset type of face wiring based on the corresponding relation between the vertex on the preset face wiring and the normal degree; calculating fourth degrees corresponding to all vertexes in a fourth vertex set positioned on other face wirings on the basis of third degrees corresponding to all vertexes in the third vertex set; and adjusting the normal degree of each vertex in the third vertex set to be a corresponding third degree, and adjusting the normal degree of each vertex in the fourth vertex set to be a corresponding fourth degree.
Optionally, the process of calculating, based on the third degrees corresponding to the vertices in the third vertex set, fourth degrees corresponding to the vertices in a fourth vertex set on the other face wirings may be implemented as: and calculating fourth degrees respectively corresponding to all vertexes in a fourth vertex set positioned on the other face wirings on the basis of the position relation between the other face wirings and the preset type face wirings and the third degrees respectively corresponding to all vertexes in the third vertex set.
In an embodiment of the present invention, as shown in fig. 16, the normal angle is an angle in a plane formed by the x-axis and the y-axis, wherein the default y-axis is 0 ° and the x-axis is 90 °.
The preset types of face wiring lines can be set as required. In some embodiments, the preset face wiring lines may be the side-face a line, the cheek-and-forehead b line, the central c line, and the inner ring of the Rembrandt light f region of the first face model in the real-time animation. The normal angle corresponding to each vertex in the third vertex set on these face wiring lines is a fixed value and can be obtained by looking up the preset correspondence between vertices on these lines and normal angles. For example, the normal angle for vertices on the side-face a line of the first face model in the real-time animation is 85°, the normal angle for vertices on the cheek-and-forehead b line is 30°, the normal angle for vertices on the central c line is 0°, and the normal angle for vertices on the inner ring of the Rembrandt light f region is 3°. The nose portion may be rotated 90° downward with the z-axis fixed.
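The angle convention above (an angle in the x/y plane, with the y-axis at 0° and the x-axis at 90°) and the fixed per-line lookup can be sketched as follows. This is an illustrative reading of the description, not code from the patent; `PRESET_LINE_DEGREES` and `normal_from_degrees` are hypothetical names, and the f entry refers to the inner ring of the "lunbolan" (Rembrandt light) region:

```python
import math

# Hypothetical lookup table for the preset lines: side-face "a" line,
# cheek/forehead "b" line, central "c" line, and the inner ring of the
# Rembrandt light "f" region (angles in degrees, per the description).
PRESET_LINE_DEGREES = {"a": 85.0, "b": 30.0, "c": 0.0, "f": 3.0}

def normal_from_degrees(deg):
    """Convert an angle in the x/y plane (0° along +y, 90° along +x)
    into a unit normal vector, matching the convention of fig. 16."""
    rad = math.radians(deg)
    return (math.sin(rad), math.cos(rad), 0.0)

# The central c line (0°) yields a normal pointing straight along +y.
print(normal_from_degrees(PRESET_LINE_DEGREES["c"]))
```

A vertex on a preset line would then have its normal overwritten with `normal_from_degrees` of the looked-up angle.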
In some alternative embodiments, the normal angles corresponding to the vertices on the nose d line of the first face model in the real-time animation and on the front-side-face turning part d line need to be consistent. Likewise, the normal angles corresponding to the vertices on the nose e line and on the front-side-face turning part e line need to be consistent.
In some alternative embodiments, as shown in fig. 17, the normal angle for vertices on the side-face a line of the first face model in the real-time animation is 85°, and the normal angles for vertices on the adjacent face wiring lines decrease by 5° per line, transitioning gradually to the front-side-face turning part e line: the normal angle on the e line is 65°, and the normal angle on the d line immediately to its right is 70°. Accordingly, vertices on the nose d line get a normal angle of 70°, and vertices on the nose e line get 65°. The shadow on the sides of the nose and the shadow at the bottom of the nose need to be connected without a break.
In addition, the normal angle for vertices on the cheek-and-forehead b line of the first face model in the real-time animation is 30°. The normal angle for each face wiring line of the frontal forehead portion can be calculated from the normal angles of the cheek-and-forehead b line and the central c line, together with the number of wiring segments between the two lines. There are 7 wiring segments between the two lines, and the b line transitions to the central c line, so the normal angle changes by 30°/7 ≈ 4.286° per segment. During the rotation, a fixed rotation step of 4.286° may be set.
For the forehead turning part, the normal angle of the e line is 65° and that of the b line is 30°; with 2 wiring segments between the two lines, each step is 17.5°, i.e., the face wiring line between them has a normal angle of 65° − 17.5° = 47.5°. For the cheek turning part, the normal angle of the e line is 65°, that of the b line is 30°, there are 4 wiring segments between the two lines, and each step is 8.75°.
For the Rembrandt light of the first face model in the real-time animation, the normal angle of the point where the center converges is 3°, and the transition to the b line spans 5 wiring segments in total, so each step is 5.4°. The outermost ring of the Rembrandt light therefore has a normal angle of 30° − 5.4° = 24.6°.
In the real-time animation, the normal angle of the nose e line of the first face model is 65° and that of the outermost ring of the Rembrandt light is 24.6°; with 4 wiring segments between the two lines, each step is 10.1°.
In this manner, two face wiring lines with known normal angles are located and the number of wiring segments between them is determined; the per-segment step is then calculated by dividing the difference between the two normal angles by the number of segments, and the normal angle of any other face wiring line between them is calculated from its segment distance to one of the two lines.
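The interpolation rule just described can be sketched as follows. The helper names are hypothetical, but the per-segment values reproduce the figures given in the description (30°/7 ≈ 4.286° for the forehead, 17.5° for the forehead turn, 5.4° for the Rembrandt light region):

```python
def per_segment_step(deg_start, deg_end, num_segments):
    """Per-segment change in normal angle between two face wiring
    lines with known angles."""
    return (deg_start - deg_end) / num_segments

def intermediate_degrees(deg_start, deg_end, num_segments):
    """Normal angle of each intermediate wiring line, stepping from
    the start line toward the end line."""
    step = per_segment_step(deg_start, deg_end, num_segments)
    return [deg_start - step * k for k in range(1, num_segments)]

# Cheek/forehead b line (30°) to central c line (0°) over 7 segments:
print(round(per_segment_step(30.0, 0.0, 7), 3))  # 4.286
# Forehead turn: e line (65°) to b line (30°) over 2 segments -> 47.5° midway.
print(intermediate_degrees(65.0, 30.0, 2))       # [47.5]
```

Each intermediate wiring line's vertices would then be assigned the corresponding interpolated angle.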
For the region under the nose of the first face model in the real-time animation, the original normal angle is 0°, and the normals can now be rotated 90° toward the direction below the face. In this way, the shadow of the region under the nose is expected to show a flat upper edge rather than a diamond shape.
In addition, the normal angle of the white region of the first face model in the real-time animation is fixed at 30°. The normal angles corresponding to the corner-of-mouth axis need to be set consistently.
By this method, the normal angles for the vertices on all face wiring lines in the right-half face model can be calculated; once they are determined, the adjusted first face model in the real-time animation can be determined based on the adjusted right-half face model.
Optionally, determining the adjusted first face model based on the adjusted right-half face model may be implemented as follows: copy the adjusted right-half face model; flip the normal direction corresponding to each vertex in the copied model; and determine the adjusted first face model based on the adjusted right-half face model and the flipped model.
In practical applications, the unadjusted left-half face model of the first face model in the real-time animation can be deleted first, and the adjusted right-half face model is then mirror-copied to the deleted left-half position. Because the normal direction corresponding to each vertex in the copied model is reversed, these normal directions need to be flipped.
Optionally, after the flipping, the flipped model may replace the left-half face model in the first face model; the adjusted right-half face model and the model replacing the left half are combined; and vertices in the combined model that satisfy a preset distance condition are welded to obtain the adjusted first face model.
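The mirror-copy and vertex-welding steps can be sketched as below. This is an illustrative sketch under stated assumptions — the mirror plane is taken as x = 0, vertices are plain coordinate tuples, and the O(n²) weld is for clarity only; the patent does not specify these details:

```python
def mirror_half_model(vertices, normals):
    """Mirror a right-half face model across the assumed x = 0 plane.
    Negating the x component of each normal corresponds to the
    'flip the copied normals' step in the description."""
    mirrored_verts = [(-x, y, z) for (x, y, z) in vertices]
    mirrored_normals = [(-nx, ny, nz) for (nx, ny, nz) in normals]
    return mirrored_verts, mirrored_normals

def weld_vertices(vertices, tolerance=1e-4):
    """Merge vertices closer than `tolerance` (e.g. the duplicated
    seam vertices along the center line after mirroring).
    Returns the welded vertex list and an index remap."""
    welded, remap = [], []
    for v in vertices:
        for i, w in enumerate(welded):
            if all(abs(a - b) <= tolerance for a, b in zip(v, w)):
                remap.append(i)
                break
        else:
            welded.append(v)
            remap.append(len(welded) - 1)
    return welded, remap
```

After welding, face index lists would be rewritten through `remap` so both halves share the seam vertices.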
It should be noted that after welding, the normals are split; that is, some vertices in the first face model carry two normals, one being the original normal and the other the adjusted normal, and the two need to be merged into one.
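One common way to merge a vertex's two normals — averaging them and renormalizing — is sketched below. The patent does not say how the merge is performed, so the averaging strategy here is an assumption:

```python
import math

def merge_split_normals(n1, n2):
    """Merge a vertex's two normals (e.g. original and adjusted)
    into one by averaging the components and renormalizing.
    Assumes the two normals are not exactly opposite."""
    sx, sy, sz = (a + b for a, b in zip(n1, n2))
    length = math.sqrt(sx * sx + sy * sy + sz * sz)
    return (sx / length, sy / length, sz / length)
```

For two perpendicular unit normals, this yields the unit bisector between them.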
Next, a normal map may be generated based on the adjusted first face model in the real-time animation. Optionally, generating the normal map based on the adjusted first face model may be implemented as: acquiring the normal information of the adjusted first face model, and baking that normal information into the second face model to generate the normal map. An example of the resulting normal map is shown in fig. 18.
After the normal map is obtained, light and shadow effect rendering can be performed based on the normal map and the second face model to obtain the light and shadow effect of the second face model in the real-time animation scene in the game. This process requires a second face model, which is in fact a face model obtained by adjusting the vertex normals of an initial second face model. The second face model may also be used for light and shadow effect rendering of the cutscene.
In adjusting the initial second face model, as shown in fig. 19, the normal angle is an angle in a plane formed by the x-axis and y-axis, where the default y-axis is 0 ° and the x-axis is 90 °.
Optionally, the above process of adjusting the vertex normal in the initial second face model may be implemented as: identifying a first vertex set which is positioned on face wiring of a preset type in an initial second face model; determining a second vertex set on other face wirings except for the face wirings in the preset type in the initial second face model; determining first degrees corresponding to all vertexes in a first vertex set on preset type face wiring based on the corresponding relation between the vertexes on the preset type face wiring and the normal degrees; calculating second degrees corresponding to all vertexes in a second vertex set on the other face wiring based on first degrees corresponding to all vertexes in the first vertex set; the normal degree of each vertex in the first vertex set is adjusted to be a corresponding first degree, and the normal degree of each vertex in the second vertex set is adjusted to be a corresponding second degree.
The preset types of face wiring lines can be set as required. In some embodiments, as shown in fig. 20, the preset face wiring lines may be the side-face a line, the cheek-and-forehead b line, the central c line, and the inner ring of the Rembrandt light f region. The normal angle corresponding to each vertex in the first vertex set on these face wiring lines is a fixed value and can be obtained by looking up the preset correspondence between vertices on these lines and normal angles. For example, the normal angle for vertices on the side-face a line is 85°, the normal angle for vertices on the cheek-and-forehead b line is 30°, the normal angle for vertices on the central c line is 0°, and the normal angle for vertices on the inner ring of the Rembrandt light f region is 3°. As shown in fig. 21, the normal angle for the vertices at the eyes and the white region may be adjusted to 0°.
After determining the first degree corresponding to each vertex in the first vertex set, the second degree corresponding to each vertex in the second vertex set may be calculated based on the first degree corresponding to each vertex in the first vertex set. The second face model comprises a plurality of face wirings, and the second vertex set is a vertex on the face wirings except the preset type of face wirings in the face wirings. After the first degree and the second degree are determined, the normal degree of each vertex in the first vertex set may be adjusted to a corresponding first degree, and the normal degree of each vertex in the second vertex set may be adjusted to a corresponding second degree.
For example, if the normal degree corresponding to a vertex on the side face a line is 85°, and the normal degrees of the side face wirings decrease by 5° per wiring from the a line toward the corner of the eye, then the first face wiring from the a line toward the eye corner corresponds to 80°, and so on: the third face wiring corresponds to 70°, and the fourth face wiring, next to the eye corner, corresponds to 65°.
The third face wiring may also be referred to as the front-side turning part d line, and in some alternative embodiments the normal angles corresponding to the vertices on the nose d line and on the front-side turning part d line need to be consistent. The fourth face wiring, next to the eye corner, may also be called the front-side turning part e line, and in some alternative embodiments the normal angles corresponding to the vertices on the nose e line and on the front-side turning part e line need to be consistent. On this basis, after the normal angles corresponding to the vertices on the front-side turning part d line and e line are determined, they may be copied to the nasal-side triangular region, i.e., the normal angle corresponding to the nose d line is adjusted to 70° and that corresponding to the nose e line to 65°.
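The fixed-degree lookup for the first vertex set, and one way of reading a "normal degree" as a direction, can be sketched as follows. The table values come from the example above; the rotation axis, the function name, and the `PRESET_WIRING_DEGREES` name are assumptions, not part of the patent.

```python
import math

# Hypothetical table of fixed normal degrees for the preset types of face
# wiring, using the example values above (a: 85°, b: 30°, c: 0°, f inner: 3°).
PRESET_WIRING_DEGREES = {"a": 85.0, "b": 30.0, "c": 0.0, "f_inner": 3.0}

def degree_to_normal(deg):
    """Interpret a normal degree as a rotation of the forward-facing normal
    about the vertical axis (an assumed convention); returns a unit vector."""
    r = math.radians(deg)
    return (math.sin(r), 0.0, math.cos(r))
```

Under this convention, 0° faces straight forward (the center c line) and 85° is nearly side-on (the side face a line).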
Optionally, the process of calculating, based on the first degrees corresponding to the vertices in the first vertex set, the second degrees corresponding to the vertices in the second vertex set on the other face wirings may be implemented as: calculating the second degrees based on the positional relationship between the other face wirings and the preset type of face wiring, together with the first degrees respectively corresponding to the vertices in the first vertex set.
The positional relationship between the other face wirings and the preset type of face wiring may be the number of wiring segments between them. Based on this number of wiring segments and the first degrees corresponding to the vertices in the first vertex set, the second degrees corresponding to the vertices in the second vertex set on the other face wirings can be calculated.
In practice, the number of wiring segments from one preset type of face wiring to another can be determined, with a number of other face wirings sandwiched between the two. Knowing the normal degrees corresponding to the two preset types of face wiring, the normal degree difference between them can be calculated, and a progressive degree derived from that difference and the number of wiring segments. Once the progressive degree is obtained, the normal degree corresponding to any of the other face wirings can be determined from the progressive degree and the number of wiring segments between that wiring and whichever of the two preset types of face wiring has the larger normal degree. The formula for calculating the progressive degree is:

progressive degree = normal degree difference ÷ number of wiring segments
For example, as shown in fig. 22, a vertex on the center c line corresponds to a normal degree of 0° and the vertices on the cheek and forehead b line correspond to 30°. There are 7 wiring segments from the c line to the b line, so the progressive degree is (30° − 0°) ÷ 7 ≈ 4.286°. Starting from the b line, the first face wiring adjacent to it corresponds to 30° − 4.286° = 25.714°, the next to 25.714° − 4.286° = 21.428°, and so on; the normal degrees corresponding to the other face wirings sandwiched between the c line and the b line can all be calculated in this way.
For another example, if the normal degree corresponding to the e line is 65°, that corresponding to the b line is 30°, and there are 2 wiring segments from the e line to the b line, then the face wiring between them corresponds to 65° − (65° − 30°) ÷ 2 = 47.5°.
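The interpolation described above amounts to stepping linearly by the progressive degree from one preset wiring to the other. A minimal sketch (the function name is an assumption):

```python
def progressive_degrees(start_deg, end_deg, num_segments):
    """Interpolate normal degrees across the face wirings sandwiched between
    two preset-type wirings, given the number of wiring segments between them.
    Returns one degree per wiring, from the start wiring to the end wiring."""
    step = (end_deg - start_deg) / num_segments  # the progressive degree
    return [start_deg + step * i for i in range(num_segments + 1)]
```

For instance, `progressive_degrees(30.0, 0.0, 7)` reproduces the b-line-to-c-line example (30°, 25.714°, 21.428°, …), and `progressive_degrees(65.0, 30.0, 2)[1]` reproduces the 47.5° e-line-to-b-line example.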
In addition, for the head in the second face model, a sphere may be created with a preset normal degree for each of its vertices. As shown in figs. 23-24, the sphere's normal degrees are transferred to the head. After the transfer, the user can manually adjust the position along the z-axis so that the normals of the head and of the forehead join seamlessly, as shown in fig. 25.
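One way to realize the sphere-to-head transfer is to assign each head vertex the radial normal of the sphere, with a z offset standing in for the manual z-axis adjustment. The radial construction and all names here are assumptions about how the transfer works:

```python
import numpy as np

def transfer_sphere_normals(head_vertices, sphere_center, z_offset=0.0):
    """Give each head vertex the normal a sphere would have at that point:
    the unit vector from the (z-adjustable) sphere center to the vertex."""
    center = np.asarray(sphere_center, dtype=float) + np.array([0.0, 0.0, z_offset])
    radial = np.asarray(head_vertices, dtype=float) - center
    return radial / np.linalg.norm(radial, axis=1, keepdims=True)
```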
As shown in figs. 26-27, to produce a better light and shadow effect, the face wirings of the eyebrow arch and upper eyelid in the second face model need to follow a strict layout rule, and the normal angles corresponding to these wirings need to be kept consistent. It should be noted that the shape of the shadow is directly tied to the face wiring: the layout of the face wiring directly determines the normal angles and thus the light and shadow effect.
The method can meet the requirement for clean, artist-controllable shadow edges in extreme close-ups, ensures that real-time animated battles in the game transition naturally, and achieves the light and shadow effect of celluloid (cel) animation.
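At its core, the cel-animation light and shadow effect that the adjusted normals produce is two-tone thresholded shading: the adjusted vertex normals decide where the lit/shadow terminator falls. A minimal sketch, not the patent's actual shader:

```python
def cel_shade(normal, light_dir, threshold=0.0):
    """Two-tone (cel) shading: a point is lit when dot(n, l) exceeds the
    threshold and in shadow otherwise, so the adjusted vertex normals
    directly control the shape of the shadow edge."""
    ndotl = sum(a * b for a, b in zip(normal, light_dir))
    return "lit" if ndotl > threshold else "shadow"
```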
The face model rendered by the method in real-time animation scenes is suitable for game battles and world exploration running at full frame rate, producing no sense of incongruity at 60+ frames per second. With the invention, light and shadow can linger on the parts of the face model where no intermediate frames are needed, transition smoothly, and avoid settling on unattractive regions. To this end, a first face model is generated and its vertex normals are adjusted, yielding a first face model for real-time animation scenes that produces natural transitions and matches the cel-animation effect.
Since the light and shadow transitions follow the face wiring, the adjusted face wiring of the first face model is not suitable for animation production, especially animation with exaggerated expressions. The adjusted normal information of the first face model therefore needs to be baked onto the second face model to form a normal map.
The method provided by the embodiment of the invention is also applicable to multiple levels of detail (Levels of Detail, LOD) across specifications. Thus, in real-time animated battles and world exploration in games, the shadows on the face model transition naturally and match the intended animation light and shadow effect. Even in extreme close-ups, adding soft-edge processing can alleviate jagged shadow edges on small features.
With the method and device, users gain an experience closer to two-dimensional animation and better immersion, and can more easily feel the charm of the original animation. Improving the picture quality of real-time in-game animation gives the game a longer life and greater vitality.
With the method and device, in a real-time animation scene in a game, the normals of the vertices in the first face model can be adjusted, the normal information corresponding to the adjusted first face model baked into the second face model to obtain a normal map, and rendering finally performed based on the normal map and the second face model to achieve the expected light and shadow effect. With the method provided by the invention, undesirable shadow effects can be avoided without weakening the shadows of the face model or locking the light source. Furthermore, the method and device avoid the prior-art problem of the face model being almost unaffected by physical illumination, thereby reducing the sense of mismatch between the face model and the body model caused by differing illumination and improving the texture of the animated picture.
The shadow rendering apparatus of one or more embodiments of the present invention will be described in detail below. Those skilled in the art will appreciate that these shadow rendering devices can each be configured using commercially available hardware components through the steps taught in this scheme.
Fig. 28 is a schematic structural diagram of a light and shadow rendering apparatus according to an embodiment of the present invention, and as shown in fig. 28, the apparatus includes:
an obtaining module 281, configured to obtain a first face model to be subjected to light and shadow effect rendering in a real-time animation scene in a game;
an adjusting module 282, configured to adjust a vertex normal in the first face model;
a generating module 283, configured to generate a normal map based on the adjusted first face model;
and a rendering module 284, configured to perform light and shadow effect rendering based on the normal map and a second face model to obtain a light and shadow effect of the second face model in the real-time animation scene in the game, where the second face model is obtained by adjusting a vertex normal of the face model in the cut scene animation.
Optionally, the adjusting module 282 is configured to:
acquiring a key animation frame;
determining a plurality of face wirings in the first face model based on the key animation frame, wherein each face wiring in the first face model is formed by connecting a plurality of vertexes together;
and adjusting the normal degree corresponding to each vertex on different face wirings in the first face model.
Optionally, the first face model comprises a left half face model and a right half face model, and the adjusting module 282 is configured to:
acquiring a key animation frame;
determining a plurality of face wirings in the right-half face model based on the key animation frame, wherein each face wiring in the right-half face model is formed by connecting a plurality of vertexes together;
adjusting the normal degree corresponding to each vertex on different face wirings in the right half face model;
and determining the adjusted first face model based on the adjusted right half face model.
Optionally, the adjusting module 282 is configured to:
copying the adjusted right half human face model;
flipping the normal direction corresponding to each vertex in the copied face model;
and determining the adjusted first face model based on the adjusted right half face model and the flipped face model.
Optionally, the adjusting module 282 is configured to:
replacing the left half side face model in the first face model with the inverted face model;
combining the adjusted right half-side face model and the face model replacing the left half-side face model;
and welding vertexes, which meet the preset distance condition, in the combined face model to obtain the adjusted first face model.
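The copy-flip-merge-weld sequence performed by the adjusting module can be sketched as follows. The x = 0 mirror plane, the normal-averaging weld rule, and all names are assumptions about one plausible implementation:

```python
import numpy as np

def mirror_and_weld(right_vertices, right_normals, weld_eps=1e-4):
    """Build a full face model from an adjusted right-half model: mirror the
    positions across the x = 0 plane, flip the x component of each copied
    normal, merge both halves, then weld vertices closer than weld_eps
    (the preset distance condition), averaging their normals."""
    rv = np.asarray(right_vertices, dtype=float)
    rn = np.asarray(right_normals, dtype=float)
    flip = np.array([-1.0, 1.0, 1.0])
    verts = np.vstack([rv, rv * flip])   # mirrored copy replaces the left half
    norms = np.vstack([rn, rn * flip])   # flipped normal directions
    kept_v, kept_n = [], []
    for v, n in zip(verts, norms):
        for i, kv in enumerate(kept_v):
            if np.linalg.norm(v - kv) < weld_eps:
                kept_n[i] = kept_n[i] + n  # weld: accumulate seam normals
                break
        else:
            kept_v.append(v)
            kept_n.append(n)
    kept_n = [n / np.linalg.norm(n) for n in kept_n]
    return np.array(kept_v), np.array(kept_n)
```

Vertices on the center seam (x ≈ 0) coincide with their mirror images, so they weld into one vertex whose normal is the normalized average of both halves.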
Optionally, the generating module 283 is configured to:
acquiring normal information of the adjusted first face model;
baking the normal information into the second face model to generate a normal map.
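Stored normal maps conventionally use the n · 0.5 + 0.5 mapping to fit unit vectors into 8-bit RGB texels. A sketch of just that encoding step; the projection of the first model's normals onto the second model's UV layout, which actual baking also requires, is omitted:

```python
import numpy as np

def bake_to_texels(normals):
    """Encode unit normals into 8-bit RGB texels with the conventional
    n * 0.5 + 0.5 mapping used by normal maps."""
    return np.clip((np.asarray(normals) * 0.5 + 0.5) * 255.0, 0, 255).astype(np.uint8)
```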
Optionally, the adjusting module 282 is further configured to:
acquiring an initial second face model;
and adjusting the vertex normal in the initial second face model to obtain the second face model.
Optionally, the adjusting module 282 is configured to:
identifying a first vertex set positioned on a preset type of face wiring in the initial second face model;
determining a second vertex set on other face wirings except for the face wirings in the preset type in the initial second face model;
determining a first degree corresponding to each vertex in the first vertex set on the preset type of face wiring based on the corresponding relation between the vertex on the preset type of face wiring and the normal degree;
calculating second degrees corresponding to all vertexes in a second vertex set on the other face wiring based on first degrees corresponding to all vertexes in the first vertex set;
and adjusting the normal degree of each vertex in the first vertex set to be a corresponding first degree, and adjusting the normal degree of each vertex in the second vertex set to be a corresponding second degree.
Optionally, the adjusting module 282 is configured to:
and calculating second degrees respectively corresponding to all vertexes in a second vertex set positioned on the other face wirings on the basis of the position relation between the other face wirings and the preset type face wirings and the first degrees respectively corresponding to all vertexes in the first vertex set.
The apparatus shown in fig. 28 may perform the light and shadow rendering method provided in the embodiments shown in fig. 1 to fig. 27, and the detailed implementation process and technical effects are described in the embodiments, and are not repeated herein.
In one possible design, the structure of the shadow rendering apparatus shown in fig. 28 may be implemented as an electronic device, as shown in fig. 29, which may include: a processor 91, and a memory 92. Wherein the memory 92 has stored thereon executable code, which when executed by the processor 91, makes the processor 91 at least implement the shadow rendering method as provided in the foregoing embodiments shown in fig. 1 to 27.
Optionally, the electronic device may further include a communication interface 93 for communicating with other devices.
In addition, an embodiment of the present invention provides a non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to implement at least the light and shadow rendering method provided in the foregoing embodiments shown in fig. 1 to 27.
The above-described apparatus embodiments are merely illustrative, wherein the units described as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, or of course by a combination of hardware and software. Based on this understanding, the above technical solution, in essence or in the part that contributes over the prior art, may be embodied in the form of a computer program product, which may be carried on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
The light and shadow rendering method provided in the embodiment of the present invention may be executed by a certain program/software, the program/software may be provided by a network side, the electronic device mentioned in the foregoing embodiment may download the program/software into a local nonvolatile storage medium, and when it needs to execute the light and shadow rendering method, the program/software is read into a memory by a CPU, and then the CPU executes the program/software to implement the light and shadow rendering method provided in the foregoing embodiment, and an execution process may refer to the schematic in fig. 1 to fig. 27.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (12)

1. A light and shadow rendering method, comprising:
obtaining a first face model to be rendered with light and shadow effects in a real-time animation scene in a game;
adjusting the vertex normals in the first face model;
generating a normal map based on the adjusted first face model; and
performing light and shadow effect rendering based on the normal map and a second face model to obtain the light and shadow effect of the second face model in the real-time animation scene in the game, wherein the second face model is obtained by adjusting the vertex normals of a face model in a cutscene animation.

2. The method according to claim 1, wherein the adjusting the vertex normals in the first face model comprises:
obtaining a key animation frame;
determining, based on the key animation frame, a plurality of face wirings in the first face model, each face wiring in the first face model being formed by a plurality of vertices connected together; and
adjusting the normal degrees respectively corresponding to the vertices located on different face wirings in the first face model.

3. The method according to claim 1, wherein the first face model comprises a left half face model and a right half face model, and the adjusting the vertex normals in the first face model comprises:
obtaining a key animation frame;
determining, based on the key animation frame, a plurality of face wirings in the right half face model, each face wiring in the right half face model being formed by a plurality of vertices connected together;
adjusting the normal degrees respectively corresponding to the vertices located on different face wirings in the right half face model; and
determining the adjusted first face model based on the adjusted right half face model.

4. The method according to claim 3, wherein the determining the adjusted first face model based on the adjusted right half face model comprises:
copying the adjusted right half face model;
flipping the normal direction corresponding to each vertex in the copied face model; and
determining the adjusted first face model based on the adjusted right half face model and the flipped face model.

5. The method according to claim 4, wherein the determining the adjusted first face model based on the adjusted right half face model and the flipped face model comprises:
replacing the left half face model in the first face model with the flipped face model;
merging the adjusted right half face model and the face model replacing the left half face model; and
welding the vertices in the merged face model that satisfy a preset distance condition to obtain the adjusted first face model.

6. The method according to claim 1, wherein the generating a normal map based on the adjusted first face model comprises:
obtaining normal information of the adjusted first face model; and
baking the normal information into the second face model to generate the normal map.

7. The method according to claim 1, wherein before performing light and shadow effect rendering based on the normal map and the second face model, the method further comprises:
obtaining an initial second face model; and
adjusting the vertex normals in the initial second face model to obtain the second face model.

8. The method according to claim 7, wherein the adjusting the vertex normals in the initial second face model comprises:
identifying a first vertex set located on a preset type of face wiring in the initial second face model;
determining a second vertex set located, in the initial second face model, on the other face wirings than the preset type of face wiring;
determining, based on a preset correspondence between vertices on the face wiring and normal degrees, first degrees respectively corresponding to the vertices in the first vertex set located on the preset type of face wiring;
calculating, based on the first degrees respectively corresponding to the vertices in the first vertex set, second degrees respectively corresponding to the vertices in the second vertex set located on the other face wirings; and
adjusting the normal degree of each vertex in the first vertex set to the corresponding first degree, and adjusting the normal degree of each vertex in the second vertex set to the corresponding second degree.

9. The method according to claim 8, wherein the calculating, based on the first degrees respectively corresponding to the vertices in the first vertex set, second degrees respectively corresponding to the vertices in the second vertex set located on the other face wirings comprises:
calculating, based on the positional relationship between the other face wirings and the preset type of face wiring, together with the first degrees respectively corresponding to the vertices in the first vertex set, the second degrees respectively corresponding to the vertices in the second vertex set located on the other face wirings.

10. A light and shadow rendering apparatus, comprising:
an obtaining module, configured to obtain a first face model to be rendered with light and shadow effects in a real-time animation scene in a game;
an adjusting module, configured to adjust the vertex normals in the first face model;
a generating module, configured to generate a normal map based on the adjusted first face model; and
a rendering module, configured to perform light and shadow effect rendering based on the normal map and a second face model to obtain the light and shadow effect of the second face model in the real-time animation scene in the game, wherein the second face model is obtained by adjusting the vertex normals of a face model in a cutscene animation.

11. An electronic device, comprising a memory and a processor, wherein the memory stores executable code which, when executed by the processor, causes the processor to perform the light and shadow rendering method according to any one of claims 1-9.

12. A non-transitory machine-readable storage medium having executable code stored thereon which, when executed by a processor of an electronic device, causes the processor to perform the light and shadow rendering method according to any one of claims 1-9.
CN202111486602.9A 2021-12-07 2021-12-07 Light and shadow rendering method, device, device and storage medium Pending CN114119851A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111486602.9A CN114119851A (en) 2021-12-07 2021-12-07 Light and shadow rendering method, device, device and storage medium


Publications (1)

Publication Number Publication Date
CN114119851A true CN114119851A (en) 2022-03-01

Family

ID=80367468


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972647A (en) * 2022-05-31 2022-08-30 北京大甜绵白糖科技有限公司 Model rendering method and device, computer equipment and storage medium
CN114998505A (en) * 2022-05-31 2022-09-02 北京大甜绵白糖科技有限公司 Model rendering method and device, computer equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019011147A1 (en) * 2017-07-10 2019-01-17 Oppo广东移动通信有限公司 Human face region processing method and apparatus in backlight scene
WO2019062852A1 (en) * 2017-09-29 2019-04-04 西安中兴新软件有限责任公司 Displaying content control method, device and computer readable medium
CN109598780A (en) * 2018-08-30 2019-04-09 广州多维魔镜高新科技有限公司 A kind of clothes 3D modeling method
CN111402385A (en) * 2020-03-26 2020-07-10 网易(杭州)网络有限公司 Model processing method and device, electronic equipment and storage medium
CN111632374A (en) * 2020-06-01 2020-09-08 网易(杭州)网络有限公司 Method and device for processing face of virtual character in game and readable storage medium
CN111768488A (en) * 2020-07-07 2020-10-13 网易(杭州)网络有限公司 Processing method and device for virtual character face model
CN112316420A (en) * 2020-11-05 2021-02-05 网易(杭州)网络有限公司 Model rendering method, device, equipment and storage medium
CN112419430A (en) * 2020-05-28 2021-02-26 上海哔哩哔哩科技有限公司 Animation playing method and device and computer equipment




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination