CN114155338B - Image rendering method, device and electronic device - Google Patents
Image rendering method, device and electronic device
- Publication number
- CN114155338B (application CN202111452005.4A)
- Authority
- CN
- China
- Prior art keywords
- rendered
- color
- dimensional model
- light
- normal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
Abstract
The disclosure relates to the technical field of image processing and provides an image rendering method, an image rendering device, and an electronic device that can improve the realism of image rendering and the visual experience. The method comprises: obtaining an object to be rendered of glass material corresponding to a background image, where the object to be rendered is a virtual object to be added on the background image; determining a virtual thickness of the object to be rendered according to a three-dimensional model of the object to be rendered, where the virtual thickness characterizes the thickness of the object to be rendered between light incidence and light emergence; determining a rendering color of the object to be rendered for the case where light irradiates the background image through the three-dimensional model having the virtual thickness, where the rendering color is the color of the background image presented on the object to be rendered; and displaying, on the background image, the object to be rendered after it has been rendered according to the rendering color.
Description
Technical Field
The disclosure relates to the technical field of image processing, and in particular to an image rendering method, an image rendering device, and an electronic device.
Background
Glass is a common material in daily life and is widely used for manufacturing household items and decorations. In virtual three-dimensional scenes, glass also appears frequently and is used to simulate effects such as the refraction and reflection of light by objects.
In the related art, a glass effect is generally achieved by map simulation. For example, a Material Capture (MatCap) map is sampled along the normal direction to obtain an effect similar to the specular reflection of glass; the texture coordinates used to sample the background picture are slightly perturbed to simulate refraction and distortion of the background; and part of the background picture is shown through via transparency to simulate the transparent texture of glass. However, with this scheme the sampling direction of the map is relatively fixed, the visual effect is poor when the object moves, the maps place high demands on designers, and they are complex to produce.
Disclosure of Invention
The disclosure provides an image rendering method, an image rendering device, and an electronic device, so as to at least solve the problem in the related art that rendering a glass effect is overly complex. The technical scheme of the present disclosure is as follows.
According to a first aspect of the embodiments of the present disclosure, an image rendering method is provided, which includes: obtaining an object to be rendered of glass material corresponding to a background image, where the object to be rendered is a virtual object to be added on the background image; determining a virtual thickness of the object to be rendered according to a three-dimensional model of the object to be rendered, where the virtual thickness characterizes the thickness of the object to be rendered between light incidence and light emergence; determining a rendering color of the object to be rendered for the case where light irradiates the background image through the three-dimensional model having the virtual thickness, where the rendering color is the color of the background image presented on the object to be rendered; and displaying, on the background image, the object to be rendered after it has been rendered according to the rendering color.
With the method and device, a virtual thickness can be determined for the object to be rendered, and the color that the background image presents on the object to be rendered can be calculated realistically through the virtual thickness for the case where light exits the object to be rendered and then irradiates the background image, so that the rendering result of the object to be rendered matches the real situation. In addition, no additional map needs to be produced for the glass material, which saves manpower and time and improves rendering efficiency.
In one possible implementation, determining the virtual thickness of the object to be rendered according to the three-dimensional model of the object to be rendered includes: obtaining a back contour map of the three-dimensional model when the three-dimensional model is a hollow model; extracting a back normal of the three-dimensional model from the back contour map; determining a blended normal for each vertex of the three-dimensional model according to the vertex normal and the back normal of that vertex; and determining the virtual thickness at each vertex from the blended normal of that vertex and the human eye observation direction.
In one possible implementation, determining the virtual thickness of the object to be rendered according to the three-dimensional model of the object to be rendered includes: when the object to be rendered is solid, calculating the dot product of the vertex normal at a vertex of the three-dimensional model and the human eye observation direction, and taking the result of the calculation as the virtual thickness at that vertex of the object to be rendered.
In one possible implementation, determining the rendering color of the object to be rendered includes: calculating, according to the virtual thickness of the three-dimensional model, the refraction color of the light on the background image when the light passes through the object to be rendered, and the edge color, where the refraction color is the color that the background image presents on the object to be rendered when the light irradiates the background image after being refracted by the object to be rendered, and the edge color is the color presented by the edge of the object to be rendered when the light irradiates the object to be rendered; determining, from the reflection map of the three-dimensional model, the reflection color when the light passes through the object to be rendered, where the reflection color is the color presented when the object to be rendered reflects the light irradiating it; and determining the rendering color of the object to be rendered by combining the refraction color, the reflection color, and the edge color.
In one possible implementation, determining the rendering color of the object to be rendered when the light irradiates the background image through the three-dimensional model having the virtual thickness includes: obtaining the refractive index of the object to be rendered; calculating the travel length of the light through the three-dimensional model according to the incident angle at which the light enters the three-dimensional model, the refractive index, and the virtual thickness of the three-dimensional model; calculating the exit point where the light leaves the three-dimensional model according to the travel length; and obtaining, as the rendering color, the color of a first target pixel point in the background image corresponding to the exit point.
In one possible implementation, obtaining the color of the first target pixel point in the background image corresponding to the exit point includes: determining the normal direction at the exit point; determining, according to the normal direction at the exit point, the exit direction in which the light enters a first medium from the exit point; and determining, according to the exit direction, the exit point, and the refractive index of the first medium, the first target pixel point on the background image irradiated by the light passing through the exit point, so as to obtain the color of the first target pixel point.
In one possible implementation, the surface of the three-dimensional model is a curved surface, and determining the normal direction at the exit point includes: obtaining the incident point of the light on the three-dimensional model; determining, according to the normal at the incident point, the circle center position corresponding to the surface of the three-dimensional model; and obtaining the normal direction at the exit point from the circle center position and the exit point.
In one possible implementation, calculating the exit point where the light passes through the three-dimensional model according to the travel length includes: when the three-dimensional model is a hollow model, determining, according to the travel length, a first exit point where the light passes through the convex surface of the hollow model and a second exit point where the light passes through the concave surface of the hollow model, where the second exit point is taken as the exit point where the light leaves the hollow three-dimensional model.
According to a second aspect of embodiments of the present disclosure, there is provided an image rendering apparatus including a scene determination module, a model determination module, a color determination module, and a rendering module.
Specifically, the scene determining module is used for obtaining an object to be rendered of glass material corresponding to the background image, where the object to be rendered is a virtual object to be added on the background image. The model determining module is used for determining the virtual thickness of the object to be rendered according to the three-dimensional model of the object to be rendered; the virtual thickness characterizes the thickness of the object to be rendered between light incidence and light emergence. The color determining module is used for determining the rendering color of the object to be rendered when the light irradiates the background image through the three-dimensional model having the virtual thickness; the rendering color is the color that the background image presents on the object to be rendered. The rendering module is used for rendering the object to be rendered according to the rendering color and displaying the rendered object to be rendered on the background image.
In some embodiments, the model determining module may be configured to calculate, when the object to be rendered is solid, a dot product of a vertex normal at a vertex of the three-dimensional model and a human eye viewing direction, and take a result of the calculation as a virtual thickness at the vertex of the object to be rendered.
In some embodiments, the model determination module further includes a back profile acquisition module, a back normal acquisition module, a normal blending module, and a thickness determination module.
The back profile obtaining module is used for obtaining a back contour map of the three-dimensional model when the three-dimensional model is a hollow model. The back normal obtaining module is used for extracting the back normal of the three-dimensional model from the back contour map. The normal blending module is used for determining a blended normal for each vertex of the three-dimensional model according to the vertex normal and the back normal of that vertex. The thickness determination module is used for determining the virtual thickness at each vertex from the blended normal of that vertex and the human eye observation direction.
In some implementations, the color determination module may include a refractive color determination module, a reflective color determination module, and a rendering color determination module.
The refraction color determining module is used for calculating, according to the virtual thickness of the three-dimensional model, the refraction color and the edge color when the light passes through the object to be rendered. The refraction color is the color that the background image presents on the object to be rendered when the light irradiates the background image after being refracted by the object to be rendered. The edge color is the color presented by the edge of the object to be rendered when the light irradiates the object to be rendered. The reflection color determining module is used for determining, from the reflection map of the three-dimensional model, the reflection color when the light passes through the object to be rendered. The reflection color is the color presented when the object to be rendered reflects the light irradiating it. The rendering color determining module is used for determining the rendering color of the object to be rendered by combining the refraction color, the reflection color, and the edge color.
In some embodiments, the color determination module further includes a refractive index acquisition module, a light travel length calculation module, an exit point determination module, and an exit point color determination module. And the refractive index acquisition module is used for acquiring the refractive index of the object to be rendered.
The light travel length calculation module is used for calculating the travel length of the light through the three-dimensional model according to the incident angle at which the light enters the three-dimensional model, the refractive index, and the virtual thickness of the three-dimensional model. The exit point determining module is used for calculating, according to the travel length, the exit point where the light passes through the three-dimensional model. The exit point color determining module is used for obtaining the color of the first target pixel point in the background image corresponding to the exit point and taking it as the rendering color.
In some embodiments, the exit point color determining module is further configured to determine a normal direction at the exit point, determine an exit direction of the light ray when the light ray enters the first medium from the exit point according to the normal direction at the exit point, and determine a first target pixel point of the background image irradiated by the light ray through the exit point according to the exit direction, the exit point, and a refractive index of the first medium, so as to obtain a color of the first target pixel point.
In some embodiments, the surface of the three-dimensional model is a curved surface, the exit point color determining module is used for obtaining an incident point of light on the three-dimensional model, determining a circle center position corresponding to the surface of the three-dimensional model according to a normal line at the incident point, and obtaining a normal line direction at the exit point by using the circle center position and the exit point.
In some embodiments, the exit point color determining module is configured to determine, when the three-dimensional model is a hollow model, a first exit point at which the light passes through a convex surface of the hollow model and a second exit point at which the light passes through a concave surface of the hollow model according to the travel length, where the second exit point is used as an exit point at which the light passes through the hollow three-dimensional model.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device comprising a processor, a memory for storing processor executable instructions, wherein the processor is configured to execute the instructions to implement the image rendering method of the first aspect and any one of the possible implementations thereof.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the image rendering method of the first aspect and any one of its possible implementations.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when run on an electronic device, causes the electronic device to perform the image rendering method of the first aspect and any one of its possible implementations.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
In the technical solution of the embodiments, on the one hand, by determining a virtual thickness for the three-dimensional model of the object to be rendered of glass material and determining the rendering color using the physical principle of light passing through a model of that thickness, the realism of rendering can be enhanced. On the other hand, no additional map needs to be produced for the glass material, which saves manpower and time and improves rendering efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a flowchart illustrating a method of image rendering, according to an exemplary embodiment;
FIG. 2 is a ray refraction schematic diagram illustrating an image rendering method according to an exemplary embodiment;
FIG. 3A is an effect diagram of an image rendering method according to an example embodiment;
FIG. 3B is another effect diagram of an image rendering method according to an example embodiment;
FIG. 4 is a block diagram of an image rendering device, according to an example embodiment;
fig. 5 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The image rendering method in the embodiment of the disclosure can be applied to various electronic devices, such as computers, tablet computers, mobile phones, intelligent wearable devices and the like, and the embodiment is not limited in particular. For convenience of description, the following embodiments will be described by taking the electronic device as a mobile phone as an example.
Fig. 1 is a flowchart illustrating an image rendering method according to an exemplary embodiment, which may include the following steps, as shown in fig. 1.
In step S11, an object to be rendered of a glass material corresponding to the background image is obtained, wherein the object to be rendered is a virtual object to be added on the background image.
Camera-type applications on electronic devices such as mobile phones are capable of rendering images, for example adding filters, photo frames, or emoticons to them. A magic expression is a rendering effect provided by shooting applications for adding virtual ornaments to an image.
In this embodiment, the image acquired by the camera is the background image. The object to be rendered is a virtual object, such as a virtual garment or a virtual ornament, that needs to be added to the background image. For example, when a user wants to add a virtual ornament to the currently photographed image (i.e., the background image), the user may select among the various magic expressions provided in the application, such as virtual clothes or virtual hats. When the magic expression selected by the user is of glass material, for example a virtual glass ornament, that magic expression is the object to be rendered.
In step S12, a virtual thickness of the object to be rendered is determined according to the three-dimensional model of the object to be rendered. The virtual thickness is a thickness used to characterize the object to be rendered between light incidence and light emergence.
The object to be rendered can be created by a designer, who sets the attribute information of its glass material, such as the refractive index of the glass and the glass thickness variation intensity. The three-dimensional model of the object to be rendered is a mathematical model built for the object to be rendered. By way of example, the three-dimensional model may include the vertex positions, texture coordinates, vertex normal directions, and screen space texture coordinates of the object to be rendered. The vertex positions are the coordinates of the vertices of the object to be rendered. The texture coordinates are the coordinates of pixels in the texture map; each vertex of the three-dimensional model may correspond to one texture coordinate, which indicates the position of the vertex in the texture map. The vertex normal direction is the normal direction at the vertex. The screen space texture coordinates are the corresponding coordinates of the vertices on the screen. In addition, the object to be rendered may further include other parameters, such as illumination information, which is not specifically limited in this embodiment. A sketch of how this per-vertex data could be declared is given below.
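As a hedged illustration only (the attribute, uniform, and varying names below are assumptions, not identifiers taken from the patent), the per-vertex data described above could be declared in a GLSL vertex shader roughly as follows:

```glsl
// Sketch only: hypothetical names, not fixed by the patent.
attribute vec3 a_position;   // vertex position of the object to be rendered
attribute vec2 a_texcoord;   // texture coordinate of the vertex in the texture map
attribute vec3 a_normal;     // vertex normal direction

uniform mat4 u_mvp;          // model-view-projection matrix

varying vec3 v_normal;       // vertex normal, passed to the fragment shader
varying vec2 v_texcoord;     // texture coordinate, passed to the fragment shader
varying vec4 v_clip_pos;     // clip-space position; screen_uv = (xy / w) * 0.5 + 0.5 per fragment

void main() {
    gl_Position = u_mvp * vec4(a_position, 1.0);
    v_normal   = a_normal;
    v_texcoord = a_texcoord;
    v_clip_pos = gl_Position; // fragment shader derives screen space texture coordinates from this
}
```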
The image carries information in four channels, RGBA. The three RGB channels store Red, Green, and Blue respectively, and the A channel stores the transparency of the image.
The edge on the far side of the model, i.e., the thickness of the model, can be estimated from the product of the line of sight and the normal, so the virtual thickness of the three-dimensional model of the object to be rendered can be determined by the line of sight and the normal. The line of sight is the human eye observation direction, i.e., the direction from the human eye to the vertex of the three-dimensional model, denoted -V; the vertex normal is denoted N. Simulating the virtual thickness of the three-dimensional model with the human eye observation direction and the normal direction of the three-dimensional model (i.e., the vertex normal) is simple and efficient. For example, the dot function, which computes the dot product of two vectors, can model the thickness: the virtual thickness of the three-dimensional model can be calculated as dot(V, N), where V is the reverse direction of -V. Substituting the vertex normal of each vertex of the three-dimensional model into the dot function yields the virtual thickness at that vertex. Alternatively, the thickness obtained by dot(V, N) can be taken as the first thickness, and adjusted on that basis to obtain a more reasonable virtual thickness. For example, the absolute value of the first thickness may be taken first, to avoid errors caused by a negative first thickness; the differences between the absolute values are then widened by an exponential function, and the final result is taken as the virtual thickness of the three-dimensional model. This process can be summarized by the formula thick = pow(abs(dot(V, N)), thicknessStrength), where the abs function computes the absolute value, so abs(dot(V, N)) is the absolute value of the first thickness, and the pow function raises that absolute value to the power thicknessStrength. Here thicknessStrength is the glass thickness variation intensity, a parameter that can be preset manually; the larger it is, the more obvious the glass thickness variation. The value thick calculated by this formula can be used as the virtual thickness of the three-dimensional model.
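A minimal GLSL sketch of the solid-model thickness estimate described above; the function and parameter names (e.g., solidThickness, thicknessStrength) are assumptions rather than identifiers fixed by the patent:

```glsl
// Sketch: virtual thickness at a vertex of a solid model.
// V is the reverse of the human eye observation direction -V, N is the vertex normal,
// thicknessStrength is the preset glass thickness variation intensity.
float solidThickness(vec3 V, vec3 N, float thicknessStrength) {
    float firstThickness = dot(V, N);              // first thickness from view direction and normal
    // abs avoids errors from a negative first thickness; pow widens the differences between values.
    return pow(abs(firstThickness), thicknessStrength);
}
```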
The object to be rendered may be hollow in shape, such as a water cup, or solid, such as a glass sphere. If the object to be rendered is solid, the light passes through it only once between incidence and emergence, and the dot product of the human eye observation direction and the vertex normal computed by the dot function can serve as the virtual thickness of the solid object to be rendered. If the object to be rendered is hollow, the light passes through it and undergoes incidence and emergence at least twice. The surface from which the light first exits is taken as the front of the object to be rendered, and the surface from which the light exits the second time is taken as the back. For a hollow object to be rendered, the virtual thickness requires blending the thicknesses of the front and back of the three-dimensional model. The process includes: obtaining a back contour map of the three-dimensional model when the three-dimensional model is a hollow model; sampling the back contour map to obtain the back normal of the three-dimensional model; for each vertex on the three-dimensional model, determining a blended normal for that vertex from its vertex normal and the corresponding back normal; and calculating the virtual thickness at each vertex of the hollow model by combining the blended normal of the hollow three-dimensional model with the human eye observation direction.
In the above embodiment, the virtual thickness calculation incorporates the back contour of the three-dimensional model and blends the back normal with the front normal (i.e., the vertex normal), so that the resulting equivalent normal carries the characteristics of both the front and the back. This simulates the effect of seeing the back contour of a hollow model through its front, allows the virtual thickness to cover hollow-model scenarios, and enriches the rendering effect.
Specifically, calculating the thickness of the back requires the back normal of the three-dimensional model, which can be sampled from the back contour map of the three-dimensional model. The back contour map is data preset when the designer builds the three-dimensional model; it records image information of the back of the three-dimensional model, such as the colors of the pixels on the back. For example, the back contour map may be sampled by the texture2D function with the formula back = texture2D(side_color_tex, screen_uv), where the texture2D function takes two input parameters: side_color_tex represents the back contour map, and screen_uv is the screen space texture coordinate of a vertex. Through this formula, the information at screen_uv in the back contour map is obtained and denoted back. The back normal is the information recorded in the RGB channels of the back contour information, i.e., the RGB channels of back hold the back normal. Illustratively, the value differences of the back normal can then be increased on the basis of back.rgb, where back.rgb is the back normal before adjustment.
After the back normal is obtained, the back normal back_n of each vertex is blended with the front normal (i.e., the vertex normal N of the solid three-dimensional model) through a mixing function, giving the blended normal of each vertex of the hollow model. The mixing function is mix(x, y, a), which blends x and y linearly and returns x*(1-a) + y*a, where a is the mixing coefficient and may be set according to the actual situation, for example 0 or 0.5. For example, when blending the back normal with the front normal, a may be the transparency of the back contour map, i.e., the information of the A channel, back.a. The normal of the whole hollow model can then be calculated by the formula N = normalize(mix(back_n, N, back.a)), where the mix function blends back_n and N according to back.a, and normalize normalizes the blended value into a unit vector, ensuring the normal vector is correct. The blended normal integrates the contours of the front and back, so that the effect of seeing the back contour through the front can be presented during rendering, simulating hollow glass more realistically.
After the blended normal of the hollow model is obtained, the thickness of the hollow model is calculated by the thickness calculation method above. Specifically, the virtual thickness of the hollow model, denoted thick_back, is calculated by the formula thick_back = pow(inside_info.w, thicknessStrength), where inside_info.w is the absolute value of the dot product of the blended normal N of the hollow model and the human eye observation direction V. In addition, after the virtual thickness of the hollow model is calculated, it can be blended with the virtual thickness of the solid three-dimensional model to further improve the realism of the hollow model's virtual thickness. For example, the blended hollow-model virtual thickness thick1 is calculated by the formula thick1 = mix(thick, thick + thick_back, thick_back), where thick is the virtual thickness of the solid three-dimensional model and thick_back is the virtual thickness of the hollow model.
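The back-normal blending and hollow-model thickness described above might be sketched in GLSL as follows; the function name and parameter list are assumptions, while side_color_tex, screen_uv, back_n, and the dot-product quantity correspond to the terms used in the patent text:

```glsl
// Sketch: blended normal and virtual thickness for a hollow model.
uniform sampler2D side_color_tex;   // back contour map preset by the designer

// N: front (vertex) normal, V: reverse of the human eye observation direction -V.
float hollowThickness(vec2 screen_uv, vec3 N, vec3 V, float thicknessStrength) {
    vec4 back   = texture2D(side_color_tex, screen_uv); // back contour information
    vec3 back_n = back.rgb;                              // back normal before adjustment
    // Blend back and front normals; back.a (transparency of the back contour map) is the coefficient.
    vec3 mixedN = normalize(mix(back_n, N, back.a));
    float thickBack = pow(abs(dot(V, mixedN)), thicknessStrength); // hollow-model thickness
    float thick     = pow(abs(dot(V, N)),      thicknessStrength); // solid-model thickness
    // Blend the two thicknesses for a more plausible hollow-model result.
    return mix(thick, thick + thickBack, thickBack);
}
```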
In step S13, the rendering color of the object to be rendered is determined for the case where the light irradiates the background image through the three-dimensional model having the virtual thickness. The rendering color is the color that the background image presents on the object to be rendered.
Light passing through a three-dimensional model of glass material produces refraction and reflection effects, and the reflection intensity differs as the thickness of the glass edge changes. In this embodiment, the refraction color and the edge color of the three-dimensional model can be calculated from its virtual thickness according to physical principles, and the reflection color when the light passes through the object to be rendered is determined from the reflection map of the three-dimensional model; the refraction color, edge color, and reflection color of the three-dimensional model are thus obtained, and the final rendering color is determined by combining these three color effects. The refraction color is the color that the background image presents on the object to be rendered when the light irradiates the background image after being refracted by the object to be rendered. The edge color is the color presented by the edge of the object to be rendered when the light irradiates it. The reflection color is the color presented when the object to be rendered reflects the light irradiating it. The rendering color integrates the effects of refraction, reflection, and edges, so the rendering of the object to be rendered better matches the effect of real glass and improves the visual experience.
The process of determining the refraction color from the virtual thickness is described first. When the three-dimensional model is a solid model, the light is refracted only once as it passes through the model. Specifically, the travel length of the light through the three-dimensional model can be calculated from the refractive index of the three-dimensional model, the incident angle at which the light enters it, and its virtual thickness; the exit point at which the light leaves the three-dimensional model is determined from the travel length; the intersection of the light passing through the exit point with the background image is then determined and called the first target pixel point; and the color of the first target pixel point is the refraction color of the light on the three-dimensional model.
The travel length of the light refers to the length the light advances, computed by a ray technique, as it passes through the surface of the three-dimensional model having the virtual thickness. Ray marching is one implementation of ray tracing used to intersect a ray with an object: the ray advances by a certain step each time, whether the current ray has reached the object surface is detected, and the advance amplitude is adjusted until the ray reaches the surface.
When the light reaches a surface of the three-dimensional model, the intersection with that surface is taken as the incident point. When the light exits a surface of the three-dimensional model, the intersection of the light with that surface is taken as the exit point. The travel length may be the length the light travels between the incident point and the exit point. A sketch of the ray-marching idea follows.
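A hedged GLSL sketch of the ray-marching idea just described; the step count, threshold, and the scene distance function sdfScene are placeholders not specified by the patent:

```glsl
// Sketch: ray marching from a start point until the ray reaches an object surface.
// sdfScene is a hypothetical signed distance function of the scene, assumed to be defined elsewhere.
float sdfScene(vec3 p);

vec3 marchToSurface(vec3 origin, vec3 dir) {
    vec3 p = origin;
    for (int i = 0; i < 64; i++) {      // advance step by step
        float d = sdfScene(p);          // distance to the nearest surface
        if (d < 0.001) break;           // current ray has reached the surface
        p += dir * d;                   // adjust the advance amplitude
    }
    return p;                           // approximate intersection point (incident or exit point)
}
```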
The incident point is the point at which a ray preset by the ray tracing technique reaches the three-dimensional model. Ray tracing is a rendering algorithm in three-dimensional computer graphics that traces rays emitted from the eye rather than light emitted from a light source; applied to a mathematical model of the scene, it presents a rendering effect that more truly conforms to physical principles.
For example, referring to FIG. 2, the human eye observation direction -V toward the three-dimensional model 200 serves as the incident ray, and the intersection of the incident ray with the three-dimensional model 200 is the incident point P0. The angle θ0 between the incident ray and the normal is the incident angle, and the angle θ1 between the refracted ray V1 and the normal is the refraction angle.
According to the law of refraction, the incident angle θ0, the refraction angle θ1, the refractive index n0 of medium 0 from which the light is incident, and the refractive index n1 of medium 1 into which the light refracts satisfy the following relationship:
n0×sinθ0=n1×sinθ1
For example, medium 0 may be air, and medium 1 is the glass object to be rendered in this embodiment. When light enters the object to be rendered from the air, the incident angle θ0 can be determined from the human eye observation direction -V and the normal N, and the refraction angle θ1 can be determined by substituting the incident angle, the refractive index of air, and the refractive index of the glass material into the law of refraction. The cosine of the refraction angle θ1 is the ratio of the thickness thick of the glass to the travel length travel_path of the light in the glass, so the travel length of the light through the glass can be calculated from cos θ1.
Specifically, the refraction direction can be derived from the formula ray_out0 = normalize(refract(-V, N, n01)), where n01 is the ratio of the refractive index n0 of medium 0 to the refractive index n1 of medium 1 when light enters medium 1 from medium 0. The refract function is a refraction function that computes the outgoing direction from the incident direction, the normal, and the refractive index ratio. The normalize function performs normalization, turning the outgoing direction into a unit vector and ensuring the direction calculation is correct. The refraction angle θ1 can be calculated by the formula theta1 = asin(sin(acos(nohv)) * n01), where nohv is the cosine of the angle between the normal and the human eye observation direction -V, i.e., of the incident angle; the acos function obtains the angle corresponding to that cosine, i.e., the incident angle; sin(acos(nohv)) * n01 gives the sine of the refraction angle; and the asin function obtains the corresponding angle, i.e., the refraction angle θ1. After the refraction angle θ1 is calculated, the per-unit-thickness distance of the light in the three-dimensional model can be calculated by the formula travel_dist0 = refrStretch / cos(theta1), where refrStretch is the light stepping length, i.e., the length the light travels through a unit thickness of glass. By the formula travel_path = ray_out0 * travel_dist0 * thick, the travel length of the light through the glass of virtual thickness can be calculated; the travel length is the product of the refraction direction, the per-unit-thickness travel distance, and the virtual thickness.
The position of the exit point where the light leaves the three-dimensional model can be determined from the travel length of the light in the three-dimensional model and the position of the incident point. For example, the position out_p0 of the exit point P1 can be calculated by the formula out_p0 = worldpos + travel_path, where worldpos is the coordinate of the incident point P0 and travel_path is the travel length. After the exit point where the light passes through the three-dimensional model is calculated, the intersection of the exiting light with the background image can be determined and taken as the first target pixel point. The first target pixel point is the point the human eye can observe from the incident point when light enters the three-dimensional model, so its color can be rendered to the incident point P0 on the three-dimensional model.
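The first refraction and the exit point out_p0 described above could be combined into one GLSL sketch along the following lines; the function name and parameter order are assumptions, while the formulas follow the ones given in the text:

```glsl
// Sketch: refraction of the incident ray into the model and the exit point P1 (out_p0).
// minusV: human eye observation direction -V (from eye toward the incident point P0),
// N: normal at P0, n01 = n0 / n1, thick: virtual thickness,
// refrStretch: light stepping length, worldpos: world coordinate of the incident point P0.
vec3 exitPoint(vec3 minusV, vec3 N, float n01, float thick, float refrStretch, vec3 worldpos) {
    vec3 rayOut0 = normalize(refract(minusV, N, n01));   // refraction direction inside the glass
    float nohv = dot(N, -minusV);                        // cosine of the incident angle
    float theta1 = asin(sin(acos(nohv)) * n01);          // refraction angle from the law of refraction
    float travelDist0 = refrStretch / cos(theta1);       // per-unit-thickness travel distance
    vec3 travelPath = rayOut0 * travelDist0 * thick;     // travel length through the virtual thickness
    return worldpos + travelPath;                        // exit point out_p0
}
```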
After exiting the interior of the three-dimensional model, the light enters another medium, such as air. With continued reference to fig. 2, after the light enters the three-dimensional model at point P0, it enters the first medium at point P1, where point P1 can be regarded as the incident point of the light into the first medium. Similarly, the travel length of the light in the first medium can be calculated by the method used to calculate the travel length of the light in the three-dimensional model: the normal at the exit point of the three-dimensional model is calculated, the exit direction is determined from that normal, and the travel distance of the light in the first medium is calculated using the refractive index of the first medium; the end of that travel distance is the intersection of the light passing through the exit point with the background image. By performing ray tracing through the three-dimensional model and the first medium of a certain thickness, the refraction effect of the light on the background image after passing through both can be simulated, improving the realism of the refraction effect.
For example, the surface of the three-dimensional model may be a curved surface, and the normal direction at the exit point P1 may be the direction from the circle center corresponding to the exit point to the exit point. When the surface of the three-dimensional model is curved, the refraction of light is stronger, giving a more obvious visual refraction effect. Specifically, the incident point of the light on the three-dimensional model is obtained first, then the circle center position of the curved surface corresponding to the exit point P1 is determined from the normal at the incident point, and the normal direction at the exit point P1 is obtained from the circle center position and the exit point P1. The circle center position origin for the exit point can be calculated by the formula origin = worldpos + N * refrStepCount * refrStretch, where worldpos is the coordinate of the incident point, refrStretch is the light stepping length, refrStepCount is the number of light steps, and N is the normal direction of the three-dimensional model. After the circle center position is determined, the direction from the circle center to the exit point P1 may be taken as the normal direction at P1; it is obtained by the formula N0 = normalize(origin - out_p0), where origin is the circle center position, out_p0 is the position of the exit point, and the normalize function performs normalization.
The exit direction can be determined from the normal direction at the exit point P1 in the same way as the refraction direction was determined: substituting the in-model refraction direction, the normal direction N0, and the ratio of the refractive index of the three-dimensional model to that of the first medium into the refract function gives the outgoing direction of the light. For example, the exit direction ray_out1 can be calculated by the formula ray_out1 = normalize(refract(ray_out0, N0, n12)), where n12 is the ratio of the refractive index of the three-dimensional model to the refractive index of the first medium. The exit angle θ2 can then be calculated from the exit direction. By way of example, the exit angle can be derived by the formula theta2 = asin(sin(acos(dot(-ray_out0, N0))) * n12), where -ray_out0 is the reverse of the in-model refraction direction and the dot function computes the cosine of the angle between it and the normal N0 at the exit point.
After the exit angle is calculated, the distance the light travels in the first medium is calculated by the formula travel_dist1 = refrStretch / cos(theta2). The intersection P2 of the light traveling in the exit direction with the background image, i.e., the first target pixel point, is then determined from the distance of the light in the first medium. For example, the coordinates of the first target pixel point P2 are determined by the formula intersect_uv1 = GetIntersectPointUV(ray_out1, travel_dist1 * thick, out_p0), where ray_out1 is the exit direction, travel_dist1 * thick is the distance of the light in the first medium after the thickness is applied, and out_p0 is the coordinate of the exit point. The GetIntersectPointUV function calculates the coordinate at which the light passing through the exit point P1 travels through the first medium in the exit direction and reaches the first target pixel point P2 on the background image.
After the first target pixel point is determined, the background image is sampled to obtain the color of the first target pixel point on the background image. For example, the color of the first target pixel point, i.e., its RGB value, can be obtained by the formula refr_clr1 = texture2D(photo_tex, intersect_uv1), where photo_tex represents the background image, intersect_uv1 is the coordinate of the first target pixel point, and the calculated refr_clr1 is the color of the first target pixel point. The color of the first target pixel point P2 may be used as the refraction color at the incident point P0 on the three-dimensional model; similarly, each pixel point on the three-dimensional model is treated as an incident point to obtain the refraction color at each pixel point of the three-dimensional model, thereby rendering the three-dimensional model. The refraction color calculated in this way incorporates the back contour of the three-dimensional model, so the refraction of light by the three-dimensional model can be simulated realistically.
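A hedged GLSL sketch of the second refraction into the first medium and the sampling of the background image; GetIntersectPointUV is only named, not defined, in the patent, so it is left here as an assumed helper, and the surrounding function name and parameter list are also assumptions:

```glsl
// Sketch: refraction out of the model into the first medium and sampling of the background image.
vec2 GetIntersectPointUV(vec3 dir, float dist, vec3 startPoint);  // assumed helper, body not given

uniform sampler2D photo_tex;   // background image

vec4 refractedBackgroundColor(vec3 rayOut0, vec3 worldpos, vec3 N, vec3 outP0,
                              float refrStepCount, float refrStretch,
                              float n12, float thick) {
    // Circle center of the curved surface, estimated from the incident point and its normal.
    vec3 origin = worldpos + N * refrStepCount * refrStretch;
    vec3 N0 = normalize(origin - outP0);                        // normal at the exit point P1
    vec3 rayOut1 = normalize(refract(rayOut0, N0, n12));        // exit direction into the first medium
    float theta2 = asin(sin(acos(dot(-rayOut0, N0))) * n12);    // exit angle
    float travelDist1 = refrStretch / cos(theta2);              // per-unit-thickness distance in air
    // Intersection P2 of the outgoing ray with the background image (first target pixel point).
    vec2 intersectUv1 = GetIntersectPointUV(rayOut1, travelDist1 * thick, outP0);
    return texture2D(photo_tex, intersectUv1);                  // refraction color refr_clr1
}
```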
To reproduce the refraction effect of real glass exactly, each time real light undergoes incidence and reflection, the reflected light becomes new incident light, and the refracted outgoing light likewise becomes the next incident light. A calculation that follows the real physical principle therefore involves complex parameters in huge quantity, requires repeated iterative calculation, has a large computation load, and places excessive demands on device performance. In this embodiment, the virtual thickness already incorporates the contour of the back of the model, so the refraction effect of the hollow model can be obtained with a single calculation of the refraction color through the virtual thickness; this is simple and efficient, requires no repeated iteration, and greatly reduces complexity. In addition, efficiency is improved, the requirement on device performance is lowered, and use on mobile terminals can be satisfied.
The above embodiment, for the case where the three-dimensional model is solid, calculates the intersection of the light with the background image after a single refraction through the model, i.e., the first target pixel point. To simulate the refraction of light by a hollow model, assume the three-dimensional model is hollow. The light is refracted at least twice in the hollow three-dimensional model; the refraction colors corresponding to the two refractions can be calculated, and the final refraction color of the three-dimensional model is determined by combining them. The hollow model includes a convex surface and a concave surface; if the convex surface is the front and the concave surface the back, the light is refracted once at the front and once at the back. The color of the first target pixel point is taken as the refraction color corresponding to the convex refraction. A second target pixel point is then determined, by the same method used for the first target pixel point, from the exit point at which the light finally leaves the hollow three-dimensional model, and its color is taken as the refraction color corresponding to the concave refraction.
Specifically, when the three-dimensional model is a hollow model, the first exit point of the light on the convex surface, i.e., the exit point P1 above, and the second exit point of the light on the concave surface may be determined from the travel length of the light in the three-dimensional model. As shown by the formula out_p0 = worldpos + travel_path, the first exit point is the sum of the incident point and the travel length. The second exit point may be the difference between the incident point and the travel length, for example out_p1 = worldpos - travel_path, where worldpos is the coordinate of the incident point P0 and travel_path is the travel length. That is, in a hollow three-dimensional model, the exit point on the convex surface is the sum of the incident point and the travel length, and the exit point on the concave surface is the difference between the incident point and the travel length. The refraction effect of the hollow model can thus be simulated, enriching the rendering effect, and for a hollow model the refractions on two different surfaces are combined, which improves the accuracy of the refraction color.
A second target pixel point corresponding to the second exit point is then determined from the second exit point, and the result of mixing the color of the first target pixel point with the color of the second target pixel point is used as the refraction color of the three-dimensional model. By way of example, the two colors may be mixed by the mixing function mix, for instance refr_clr = mix(refr_clr1, refr_clr2, doubleRefrMix), where refr_clr1 is the color of the first target pixel point, refr_clr2 is the color of the second target pixel point, and doubleRefrMix is a double-refraction mixing coefficient that may be predetermined by the designer. The mixed color is used as the refraction color of the three-dimensional model.
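The hollow-model exit points and the mixing of the two refraction colors might look like the following GLSL sketch (the function names are assumptions; the formulas mirror those above):

```glsl
// Sketch: exit points of a hollow model and the mixed refraction color.
vec3 convexExit(vec3 worldpos, vec3 travelPath)  { return worldpos + travelPath; } // out_p0
vec3 concaveExit(vec3 worldpos, vec3 travelPath) { return worldpos - travelPath; } // out_p1

// refrClr1 / refrClr2: colors sampled at the first and second target pixel points;
// doubleRefrMix: designer-preset double-refraction mixing coefficient.
vec4 hollowRefractionColor(vec4 refrClr1, vec4 refrClr2, float doubleRefrMix) {
    return mix(refrClr1, refrClr2, doubleRefrMix);   // blended refraction color refr_clr
}
```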
The convex surface of a hollow three-dimensional model magnifies the background image, and the concave surface shrinks it. The refraction color of the three-dimensional model determined in this embodiment mixes the refraction effects of the concave and convex surfaces, can simulate the refraction effect of a hollow model, and enhances the richness and realism of the rendering.
The surface of glass is usually very smooth, so its reflection of ambient light is specular, and the reflection color can be expressed by a reflection map. The reflection map may be a panorama or a MatCap map, pre-designed by the designer. When the object to be rendered is rendered, the reflection map can be sampled, and the color stored in it is used as the reflection color of the object to be rendered.
In some embodiments, the color at the edge of the three-dimensional model of the object to be rendered may be adjusted to enhance or attenuate it, simulating the sudden strengthening of refraction and reflection at the edge of glass, so that the rendering is more realistic. Illustratively, the edge of the three-dimensional model is determined first, and the color at the edge is then adjusted. The edge can be determined through the dot function, e.g., dot(N, V), with the normal of the three-dimensional model and the human eye observation direction as input parameters; the result of the dot function identifies the edge of the three-dimensional model. The light intensity of the color at the edge is then enhanced within a certain range, for example by setting an enhancement coefficient and taking the light intensity at the edge multiplied by the enhancement coefficient as the adjusted value, which gives the edge color.
The final rendering color of the three-dimensional model can be obtained by combining the determined refraction color, reflection color, and edge color. Illustratively, the sum of the refraction color, the reflection color, and the edge color may be used as the final rendering color.
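A hedged GLSL sketch of the edge color and the final combination; the 1.0 - dot(N, V) edge term and the names edgeStrength and baseEdgeColor are assumptions, since the patent only states that dot(N, V) identifies the edge and that an enhancement coefficient is multiplied in:

```glsl
// Sketch: edge color from the dot(N, V) edge term and the final rendering color.
// refrClr / reflClr: refraction and reflection colors determined earlier;
// baseEdgeColor, edgeStrength: assumed edge base color and enhancement coefficient.
vec3 finalRenderColor(vec3 N, vec3 V, vec3 refrClr, vec3 reflClr,
                      vec3 baseEdgeColor, float edgeStrength) {
    float edge = 1.0 - clamp(dot(N, V), 0.0, 1.0);       // close to 1 near the silhouette edge
    vec3 edgeClr = baseEdgeColor * edge * edgeStrength;  // light intensity enhanced at the edge
    return refrClr + reflClr + edgeClr;                  // sum as the final rendering color
}
```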
In step S14, the object to be rendered is rendered according to the rendering color, and the rendered object to be rendered is displayed on the background image.
The shader in the mobile phone can render the object to be rendered layer by layer; the shader obtains the determined rendering color and applies it to the object to be rendered. The object to be rendered can also be rendered in other ways, for example by material capture: the material information of the object to be rendered is obtained from its texture map, the light source information is obtained, and the light source and material are then rendered onto the object to be rendered.
The rendered object to be rendered can be displayed at the corresponding position of the background image, simulating the effect of adding the virtual object to the background image. As shown in fig. 3A, a virtual object 301 to be rendered may be displayed in a background image 300 photographed by the camera; the object 301 may be a virtual glass. In fig. 3A, when light passes through the glass 301, the glass 301 produces refraction, reflection, and other effects on the light, so the background image observed by the human eye through the glass 301 shows realistic refraction and reflection. As the human eye observation direction (i.e., the camera direction) changes, so does the effect presented on the virtual glass 301 in the background image. When light irradiates the cartoon toy 302 in the background image through the glass 301, the cartoon toy 302 exhibits the effect shown in fig. 3B. Therefore, when an object to be rendered of glass material is rendered, the method provided by this embodiment can render fairly realistic glass refraction and reflection effects, making the rendering of glass more real and accurate. In addition, this embodiment does not require repeated complex physical calculations: through the virtual thickness, the effect of multiple refractions of light can be obtained with a single calculation, which reduces the requirement on device performance and lowers energy consumption.
For example, the object to be rendered may be placed at a specific position of the background image, and the position may be flexibly updated according to the content of the background image. Taking the object to be rendered as a magic expression as an example, the magic expression may be fixedly displayed at a specific position such as the left side, the right side, or the upper side of the background image, or the position of a specific object such as a human face in the background image may be detected and the magic expression displayed around the face; this embodiment does not specifically limit the position.
In the above embodiment, the refraction color of the object to be rendered of glass material can be determined according to physical principles, and the refraction color, the reflection color, and the edge color are combined to determine the final rendering color, so that the rendering effect is more consistent with how real glass refracts and reflects light, the result is more realistic, and the visual experience of the user can be improved.
Fig. 4 is a block diagram illustrating an image rendering apparatus according to an exemplary embodiment. Referring to fig. 4, the image rendering apparatus 400 includes a scene determination module 410, a model determination module 420, a color determination module 430, and a rendering module 440. The image rendering apparatus 400 may be used to perform the above-described image rendering method, for example, the scene determination module 410 may be used to perform the above-described step S11, the model determination module 420 may be used to perform the above-described step S12, the color determination module 430 may be used to perform the above-described step S13, and the rendering module 440 may be used to perform the above-described step S14.
Specifically, the scene determination module 410 is configured to obtain an object to be rendered of glass material corresponding to the background image, where the object to be rendered is a virtual object to be added onto the background image. The model determination module 420 is configured to determine a virtual thickness of the object to be rendered according to the three-dimensional model of the object to be rendered, where the virtual thickness represents the thickness between the point where the light enters the object to be rendered and the point where it exits. The color determination module 430 is configured to determine the rendering color of the object to be rendered when light irradiates the background image through the three-dimensional model with the virtual thickness, where the rendering color is the color that the background image presents on the object to be rendered. The rendering module 440 is configured to render the object to be rendered according to the rendering color and display the rendered object to be rendered on the background image.
In some implementations, the model determination module 420 may be configured to calculate a dot product of a vertex normal at a vertex of the three-dimensional model and a human eye viewing direction when the object to be rendered is solid, and take a result of the calculation as a virtual thickness at the vertex of the object to be rendered.
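A minimal per-vertex sketch of this thickness calculation follows, under the assumption that both vectors are expressed in the same space; the function name is illustrative.

```python
import numpy as np

def virtual_thickness_solid(vertex_normal, view_dir):
    """Virtual thickness at a vertex of a solid object: the dot product of the
    normalised vertex normal and the human-eye viewing direction."""
    n = vertex_normal / np.linalg.norm(vertex_normal)
    v = view_dir / np.linalg.norm(view_dir)
    return float(np.dot(n, v))
```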
In some embodiments, the model determination module 420 further includes a back profile acquisition module, a back normal acquisition module, a normal blending module, and a thickness determination module.
The back profile acquisition module is used for acquiring a back profile map of the three-dimensional model when the three-dimensional model is a hollow model. The back normal acquisition module is used for extracting the back normal of the three-dimensional model according to the back contour map of the three-dimensional model. The normal mixing module is used for determining the mixed normal of each vertex of the three-dimensional model according to the vertex normal and the back surface normal of each vertex. The thickness determination module is used for determining the virtual thickness of each vertex through the mixed normal line of each vertex and the human eye observation direction.
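Assuming the back-face normal has already been extracted from the back profile map, the per-vertex blending and thickness computation might look like the sketch below; the 50/50 blending weight and the function name are illustrative assumptions, not values fixed by this embodiment.

```python
import numpy as np

def virtual_thickness_hollow(vertex_normal, back_normal, view_dir, weight=0.5):
    """Blend the front vertex normal with the back-face normal, then take the
    dot product with the viewing direction as the virtual thickness."""
    blended = (1.0 - weight) * vertex_normal + weight * back_normal
    blended = blended / np.linalg.norm(blended)
    v = view_dir / np.linalg.norm(view_dir)
    return float(np.dot(blended, v))
```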
In some implementations, the color determination module 430 may include a refraction color determination module, a reflection color determination module, and a rendering color determination module.
The refraction color determination module is configured to calculate, according to the virtual thickness of the three-dimensional model, the refraction color that the light gives the background image and the edge color when the light passes through the object to be rendered. The refraction color is the color that the background image presents on the object to be rendered when the light irradiates the background image after being refracted by the object to be rendered. The edge color is the color that the edge of the object to be rendered exhibits when light irradiates the object to be rendered. The reflection color determination module is configured to determine, through the reflection map of the three-dimensional model, the reflection color of the light with respect to the background image when the light passes through the object to be rendered. The reflection color is the color presented by the light reflected by the object to be rendered when the light irradiates it. The rendering color determination module is configured to determine the rendering color of the object to be rendered by combining the refraction color, the reflection color, and the edge color.
In some embodiments, the color determination module 430 further includes a refractive index acquisition module, a light travel length calculation module, an exit point determination module, and an exit point color determination module. The refractive index acquisition module is configured to obtain the refractive index of the object to be rendered.
The light travel length calculation module is configured to calculate the length that the light travels through the three-dimensional model according to the incident angle at which the light enters the three-dimensional model, the refractive index, and the virtual thickness of the three-dimensional model. The exit point determination module is configured to calculate, according to the travel length, the exit point at which the light leaves the three-dimensional model. The exit point color determination module is configured to obtain the color of the first target pixel point in the background image corresponding to the exit point and use it as the rendering color.
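Under the assumption of a locally flat entry surface and unit-length direction vectors, the travel length and exit point could be sketched as follows. The refract() helper implements Snell's law in its standard vector form; the function names are illustrative and not taken from this embodiment.

```python
import numpy as np

def refract(incident, normal, eta):
    """Refracted direction per Snell's law; 'eta' is the ratio n1/n2 and
    'normal' points against the incident ray. Returns None on total internal reflection."""
    cos_i = -float(np.dot(normal, incident))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * incident + (eta * cos_i - cos_t) * normal

def exit_point(entry_point, incident, normal, eta, virtual_thickness):
    """Travel length through the model: thickness / cos(theta_t); the exit point
    is the entry point advanced by that length along the refracted direction."""
    refracted = refract(incident, normal, eta)
    if refracted is None:
        return None
    cos_t = abs(float(np.dot(refracted, -normal)))
    travel_length = virtual_thickness / max(cos_t, 1e-6)
    return entry_point + travel_length * refracted
```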
In some embodiments, the exit point color determination module is further configured to determine a normal direction at the exit point, determine the exit direction of the light when it enters the first medium from the exit point according to the normal direction at the exit point, determine, according to the exit direction, the exit point, and the refractive index of the first medium, the first target pixel point that the light irradiates on the background image through the exit point, and obtain the color of the first target pixel point.
In some embodiments, when the surface of the three-dimensional model is a curved surface, the exit point color determination module is configured to obtain the incident point of the light on the three-dimensional model, determine the position of the center of curvature corresponding to the surface of the three-dimensional model according to the normal at the incident point, and obtain the normal direction at the exit point from that center position and the exit point.
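For a locally spherical surface this can be sketched as follows; the outward, unit-length entry normal and the explicit radius parameter are assumptions introduced for illustration.

```python
import numpy as np

def exit_normal_curved(entry_point, entry_normal, exit_pt, radius):
    """The centre of curvature lies along the inward normal at the entry point,
    at distance 'radius'; the exit normal points from that centre to the exit point."""
    center = entry_point - entry_normal * radius   # entry_normal: outward unit normal
    direction = exit_pt - center
    return direction / np.linalg.norm(direction)
```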
In some embodiments, when the three-dimensional model is a hollow model, the exit point color determination module is configured to determine, according to the travel length, a first exit point at which the light passes through the convex surface of the hollow model and a second exit point at which the light passes through the concave surface of the hollow model, where the second exit point is taken as the exit point at which the light leaves the hollow three-dimensional model.
With respect to the image rendering apparatus in the above-described embodiments, the specific manner in which the respective modules perform operations has been described in detail in the embodiments of the image rendering method, and will not be repeated here.
The present disclosure also provides an electronic device that is applicable to performing the above-described image rendering method. Fig. 5 is a schematic structural diagram of an electronic device 500 provided in the present disclosure. As shown in fig. 5, the electronic device 500 may include at least one processor 501 and a memory 503 for storing instructions executable by the processor 501. Wherein the processor 501 is configured to execute instructions in the memory 503 to implement the image rendering method in the above-described embodiments.
In addition, electronic device 500 may also include communication bus 502 and at least one communication interface 504.
The processor 501 may be a graphics processing unit (GPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs according to the present disclosure.
Communication bus 502 may include a path to transfer information between the aforementioned components.
The communication interface 504 uses any transceiver-like apparatus to communicate with other devices or communication networks, such as an Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
The memory 503 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including a compact disc, a laser disc, an optical disc, a digital versatile disc, a Blu-ray disc, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be stand-alone and connected to the processing unit by a bus, or may be integrated with the processing unit, for example as a volatile storage medium in the GPU.
The memory 503 is configured to store the instructions for executing the solutions of the present disclosure, and their execution is controlled by the processor 501. The processor 501 is configured to execute the instructions stored in the memory 503 to implement the functions of the image rendering method of the present disclosure.
In a particular implementation, as one embodiment, processor 501 may include one or more GPUs, such as GPU0 and GPU1 in fig. 5.
In a particular implementation, as one embodiment, the electronic device 500 may include multiple processors, such as the processor 501 and the processor 507 in fig. 5. Each of these processors may be a single-core processor or a multi-core processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In a specific implementation, as one embodiment, the electronic device 500 may also include an output device 505 and an input device 506. The output device 505 communicates with the processor 501 and may display information in a variety of ways. For example, the output device 505 may be a liquid crystal display (LCD), a light emitting diode (LED) display device, a cathode ray tube (CRT) display device, or a projector, among others. The input device 506 communicates with the processor 501 and may accept user input in a variety of ways. For example, the input device 506 may be a mouse, a keyboard, a touch screen device, a sensing device, or the like.
Those skilled in the art will appreciate that the structure shown in fig. 5 does not constitute a limitation of the electronic device 500, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
The present disclosure also provides a computer-readable storage medium having instructions stored thereon, which when executed by a processor of a server, enable the server to perform the image rendering method provided by the embodiments of the present disclosure described above.
The disclosed embodiments also provide a computer program product containing instructions that, when run on a server, cause the server to perform the image rendering method provided by the disclosed embodiments described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111452005.4A CN114155338B (en) | 2021-11-30 | 2021-11-30 | Image rendering method, device and electronic device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114155338A CN114155338A (en) | 2022-03-08 |
CN114155338B (en) | 2024-12-10 |
Family
ID=80455348
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111452005.4A Active CN114155338B (en) | 2021-11-30 | 2021-11-30 | Image rendering method, device and electronic device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114155338B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115018966A (en) * | 2022-06-24 | 2022-09-06 | 网易(杭州)网络有限公司 | Virtual model rendering method, device, electronic device and storage medium |
CN116421970B (en) * | 2023-06-12 | 2023-12-05 | 腾讯科技(深圳)有限公司 | Exterior rendering method, device, computer equipment and storage medium for virtual objects |
CN117475050B (en) * | 2023-10-08 | 2025-02-14 | 粒界(上海)信息科技有限公司 | Vehicle model rendering method, device, storage medium and electronic device |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102855655A (en) * | 2012-08-03 | 2013-01-02 | 吉林禹硕动漫游戏科技股份有限公司 | Parallel ray tracing rendering method based on GPU (Graphic Processing Unit) |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007029446A1 (en) * | 2005-09-01 | 2007-03-15 | Matsushita Electric Industrial Co., Ltd. | Image processing method, image processing device, and image processing program |
CN104966312B (en) * | 2014-06-10 | 2017-07-21 | 腾讯科技(深圳)有限公司 | A kind of rendering intent, device and the terminal device of 3D models |
Also Published As
Publication number | Publication date |
---|---|
CN114155338A (en) | 2022-03-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |