
CN112734896A - Ambient occlusion rendering method and apparatus, storage medium, and electronic device - Google Patents

Info

Publication number: CN112734896A (application CN202110024576.1A; granted as CN112734896B)
Authority: CN (China)
Prior art keywords: rendering, model, sampling points, projection, sampling
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventor: 吴黎辉
Assignee (current and original): Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G06T 15/04 Texture mapping
    • G06T 15/50 Lighting effects
    • G06T 15/60 Shadow generation
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography


Abstract

The present disclosure relates to the field of image processing, and in particular, to an ambient occlusion rendering method, an ambient occlusion rendering apparatus, a computer-readable storage medium, and an electronic device. The ambient occlusion rendering method includes: determining pixels of a target object located in a projection space and marking them as sampling points; projecting the sampling points into a perspective space through a perspective transformation, and determining a texture map based on the world coordinates of the sampling points; and determining an ambient occlusion rendering value of the sampling points, and rendering an ambient occlusion image corresponding to the target object according to the ambient occlusion rendering value and the texture map. The ambient occlusion rendering method can reduce CPU overhead while simulating an ambient occlusion effect.

Description

Ambient occlusion rendering method and apparatus, storage medium, and electronic device
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an ambient occlusion rendering method, an ambient occlusion rendering apparatus, a computer-readable storage medium, and an electronic device.
Background
In real-time rendering applications, the contrast between light and dark is lost beneath a character's feet when the character stands indoors or inside a large area of shadow. On PCs or game consoles this problem is generally addressed with screen-space ambient occlusion, but on mobile platforms, performance and thermal constraints call for a more lightweight solution.
In the prior art, two methods are generally used to address this problem:
One is to use a patch: a quad is created under the character's feet and drawn with alpha blending using a circular texture. However, when the ground is sloped or uneven, this method can produce incorrect occlusion relationships; the hard edge where the patch intersects the ground can be softened in the manner of soft particles, but the resulting shadow quality is poor.
The other is to render the shadow with a projector: the models inside the projector's frustum are first culled from the scene, then re-rendered with the projector's projection matrix to obtain the sampling UVs. However, this method requires the CPU to perform a scene-culling pass, which costs CPU performance, and the projected models must be rendered an additional time, so a large model face count incurs a large CPU overhead.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure is directed to an ambient occlusion rendering method, an ambient occlusion rendering apparatus, a computer-readable storage medium, and an electronic device, which aim to reduce CPU overhead while simulating an ambient occlusion effect.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of the embodiments of the present disclosure, there is provided an ambient occlusion rendering method, including: determining pixels of a target object located in a projection space and marking them as sampling points; projecting the sampling points into a perspective space through a perspective transformation, and determining a texture map based on the world coordinates of the sampling points; and determining an ambient occlusion rendering value of the sampling points, and rendering an ambient occlusion image corresponding to the target object according to the ambient occlusion rendering value and the texture map.
According to some embodiments of the present disclosure, based on the foregoing scheme, determining and marking the pixels of the target object located in the projection space as sampling points includes: acquiring a rendering model corresponding to the target object located in the camera coordinate space; computing a view volume model from the rendering model; and rendering the view volume model and performing a depth test, marking the pixels located in the projection space of the view volume model as the sampling points.
According to some embodiments of the present disclosure, based on the foregoing scheme, rendering the view volume model and performing a depth test to mark the pixels located in the projection space of the view volume model as the sampling points includes: rendering a first face of the view volume model to obtain a first rendered model, performing a depth test on the first rendered model, and computing a first stencil value; rendering a second face of the view volume model to obtain a second rendered model, performing a depth test on the second rendered model, and computing a second stencil value based on the first stencil value; and marking the pixels located in the projection space of the view volume model as the sampling points according to the second stencil value.
According to some embodiments of the disclosure, based on the foregoing scheme, the first face of the view volume model is either the face toward the camera or the face away from the camera.
According to some embodiments of the present disclosure, based on the foregoing solution, the method further comprises calculating the world coordinates of the sampling points, including: rendering the view volume model to obtain the depth values of the sampling points; and calculating the world coordinates of the sampling points from those depth values.
According to some embodiments of the present disclosure, based on the foregoing solution, projecting the sampling points into a perspective space through a perspective transformation and determining a texture map based on the world coordinates of the sampling points includes: projecting the sampling points into the perspective space through a perspective transformation using a projector component, obtaining a perspective matrix; computing a projection map from the perspective matrix and the world coordinates of the sampling points; and sampling a shadow texture, and generating the texture map from the projection map according to the shadow texture.
According to some embodiments of the present disclosure, based on the foregoing scheme, determining the ambient occlusion rendering value of the sampling points includes: calculating the height difference of the sampling points from the projection position coordinates and the world coordinates of the sampling points; setting an ambient occlusion fade distance; and calculating the ambient occlusion rendering value from the height difference and the ambient occlusion fade distance.
According to a second aspect of the embodiments of the present disclosure, there is provided an ambient occlusion rendering apparatus, including: a marking module configured to determine and mark the pixels of a target object located in a projection space as sampling points; a projection module configured to project the sampling points into a perspective space through a perspective transformation and determine a texture map based on the world coordinates of the sampling points; and a drawing module configured to determine an ambient occlusion rendering value of the sampling points and render an ambient occlusion image corresponding to the target object according to the ambient occlusion rendering value and the texture map.
According to a third aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the ambient occlusion rendering method as in the above embodiments.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic device, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the ambient occlusion rendering method as in the above embodiments.
Exemplary embodiments of the present disclosure may have some or all of the following benefits:
In the technical solutions provided by some embodiments of the present disclosure, the pixels of a target object located in a projection space are marked as sampling points, the sampling points are then projected into a perspective space through a perspective transformation and a texture map is computed, and finally an ambient occlusion image of the target object is drawn according to the texture map. On the one hand, marking the pixels in the projection space as sampling points avoids a CPU-side culling pass over the target object's view volume, preserving the ambient occlusion effect while saving CPU computation, so the method suits the performance and thermal constraints of mobile terminals. On the other hand, when the ambient occlusion image is drawn, only the sampling points are projected into the perspective space for the simulated ambient occlusion computation, so re-rendering all projected models is avoided and a good ambient occlusion effect is preserved even when the model face count is large or complex.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
Fig. 1 schematically illustrates an ambient occlusion rendering method using a patch in an exemplary embodiment of the present disclosure;
Fig. 2 schematically illustrates a flowchart of an ambient occlusion rendering method in an exemplary embodiment of the present disclosure;
Fig. 3 schematically illustrates a view volume model in an exemplary embodiment of the present disclosure;
Fig. 4 schematically illustrates sampling points in the projection space of a view volume model in an exemplary embodiment of the present disclosure;
Fig. 5 schematically illustrates the composition of an ambient occlusion rendering apparatus in an exemplary embodiment of the present disclosure;
Fig. 6 schematically illustrates a computer-readable storage medium in an exemplary embodiment of the present disclosure;
Fig. 7 schematically illustrates the structure of a computer system of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
In real-time rendering, the shadow under a character's feet can be produced with an ambient occlusion effect. Fig. 1 schematically illustrates an ambient occlusion rendering method using a patch in an exemplary embodiment of the present disclosure: as shown in Fig. 1, a quad is created under the character's feet and drawn with alpha blending using a circular texture. However, when the ground is sloped or uneven, this method can produce incorrect occlusion relationships; the hard edge where the patch intersects the ground can be softened in the manner of soft particles, but the resulting shadow quality is poor.
The other method renders the shadow with a projector: the models inside the projector's frustum are first culled from the scene, then re-rendered with the projector's projection matrix to obtain the sampling UVs. This, however, requires the CPU to perform a scene-culling pass, which costs CPU performance; moreover, the projected models must be rendered an additional time, so a huge model face count incurs a relatively large CPU overhead.
In view of the problems in the related art, the present disclosure provides an ambient occlusion rendering method that aims to offer a more lightweight solution suited to mobile terminals with limited performance and thermal headroom. Implementation details of the technical solutions of the embodiments of the present disclosure are set forth below.
Fig. 2 schematically illustrates a flowchart of an ambient occlusion rendering method in an exemplary embodiment of the present disclosure. As shown in Fig. 2, the ambient occlusion rendering method includes steps S1 to S3:
S1, determining the pixels of the target object located in the projection space and marking them as sampling points;
S2, projecting the sampling points into a perspective space through a perspective transformation, and determining a texture map based on the world coordinates of the sampling points;
S3, determining an ambient occlusion rendering value of the sampling points, and rendering the ambient occlusion image corresponding to the target object according to the ambient occlusion rendering value and the texture map.
In the technical solutions provided by some embodiments of the present disclosure, the pixels of a target object located in a projection space are marked as sampling points, the sampling points are then projected into a perspective space through a perspective transformation and a texture map is computed, and finally an ambient occlusion image of the target object is drawn according to the texture map. On the one hand, marking the pixels in the projection space as sampling points avoids a CPU-side culling pass over the target object's view volume, preserving the ambient occlusion effect while saving CPU computation, so the method suits the performance and thermal constraints of mobile terminals. On the other hand, when the ambient occlusion image is drawn, only the sampling points are projected into the perspective space for the simulated ambient occlusion computation, so re-rendering all projected models is avoided and a good ambient occlusion effect is preserved even when the model face count is large or complex.
Hereinafter, each step of the ambient occlusion rendering method in this exemplary embodiment will be described in more detail with reference to the drawings and embodiments.
In step S1, the pixels of the target object located in the projection space are determined and marked as sampling points.
In an embodiment of the present disclosure, determining and marking the pixels of the target object located in the projection space as sampling points includes:
Step S11: acquiring a rendering model corresponding to the target object located in the camera coordinate space.
Specifically, the target object is the object for which an ambient occlusion (AO) effect needs to be drawn; for example, when an underfoot shadow is drawn, the target object is the character's feet. The target object initially lies in the world coordinate space; the world coordinate system is the absolute coordinate system of the scene and its position is fixed, so the target object cannot be presented on the screen directly.
The target object in the world coordinate space is transformed into the camera coordinate space to obtain the target object in camera coordinates. The camera coordinate system is a three-dimensional rectangular coordinate system whose origin is the optical center of the camera and whose Z axis is the optical axis.
After the transformation into the camera coordinate space, the rendering model corresponding to the target object is obtained: the camera performs a conventional render to produce the rendering model, which includes the position coordinates of the target object in the camera coordinate space and its depth information. The position coordinates are the coordinates obtained after transforming the target object into the camera coordinate space; the depth information can be read from the depth buffer, which is used to distinguish the layer on which each color lies so that occluded colors are not displayed, and which stores, for each pixel of the rendering model, the depth value from the camera to the corresponding point in the scene.
Many prior-art methods exist for converting the world coordinate system into the camera coordinate system and for obtaining the rendering model, and thus the position coordinates and depth information, in camera space; they are not described in detail here.
Step S12: computing a view volume model from the rendering model.
Specifically, the position and orientation of the camera are defined in the camera coordinate space, but the camera's field of view is not infinite, so a view volume must be created for it: objects inside the view volume (i.e., inside the projection space) are projected onto the viewing plane, while objects outside the view volume are discarded from processing.
Fig. 3 schematically illustrates a view volume model in an exemplary embodiment of the present disclosure. Three-dimensional graphics are usually drawn with perspective projection, and for perspective projection, as shown in Fig. 3, the view volume model is a quadrangular frustum: 301 is the virtual camera in the camera coordinate space, i.e., the center of projection, 302 is the near clipping plane (Near) of the view volume model, and 303 is its far clipping plane (Far).
It should be noted that the view volume model is computed from the rendering model: by means of the Shadow Volume technique, a bounding box of the rendering model's projection can be used as the view volume, so that the view volume encloses the rendering model corresponding to the target object.
Step S13: rendering the view volume model and performing a depth test, marking the pixels located in the projection space of the view volume model as the sampling points.
In one embodiment of the present disclosure, the pixels located in the projection space of the view volume model can be marked with the Shadow Volume algorithm. Rendering the view volume model and performing a depth test to mark the pixels in its projection space as the sampling points may include the following steps:
Step S131, rendering a first face of the view volume model to obtain a first rendered model, performing a depth test on the first rendered model, and computing a first stencil value;
Step S132, rendering a second face of the view volume model to obtain a second rendered model, performing a depth test on the second rendered model, and computing a second stencil value based on the first stencil value;
Step S133, marking the pixels located in the projection space of the view volume model as the sampling points according to the second stencil value.
The depth test compares the depth of the current pixel with the depth stored at the corresponding position in the depth buffer. In the present disclosure, a stencil buffer is set up to accumulate the depth-test results, so as to mark the pixels located in the projection space of the view volume model.
In one embodiment of the present disclosure, the first face of the view volume model may be its front face, i.e., the face toward the camera, and the second face may be its back face, i.e., the face away from the camera. Step S13 may then specifically include the following steps:
Step S131, rendering the first face of the view volume model to obtain the first rendered model, performing a depth test on the first rendered model, and computing the first stencil value.
Specifically, the front face of the view volume model is rendered first and depth-tested: if the depth value of the current pixel is smaller than that in the depth buffer, the depth test passes, the stencil buffer is incremented, and the stencil value increases by 1; if the depth value of the current pixel is larger than that in the depth buffer, the depth test fails and the stencil value is unchanged. Each pixel of the rendered face is depth-tested to obtain its first stencil value in turn; no color needs to be output at this point.
Step S132, rendering the second face of the view volume model to obtain the second rendered model, performing a depth test on the second rendered model, and computing the second stencil value based on the first stencil value.
Specifically, the back face of the view volume model is rendered and depth-tested: if the depth value of the current pixel is smaller than that in the depth buffer, the depth test passes and the stencil value is unchanged; if it is larger, the depth test fails, the stencil buffer is decremented, and the stencil value decreases by 1. Each pixel of the rendered face is depth-tested and the stencil value is updated from the first stencil value to obtain the second stencil value; again, no color needs to be output.
Step S133, marking the pixels located in the projection space of the view volume model as the sampling points according to the second stencil value.
If the second stencil value is greater than 0, the pixel located in the projection space is marked as a sampling point, and color is output so that the marking details can be inspected.
Fig. 4 schematically illustrates sampling points in the projection space of a view volume model in an exemplary embodiment of the present disclosure. As shown in Fig. 4, when, for example, an underfoot ambient occlusion effect needs to be drawn, 401 is the virtual camera, 402 is the near clipping plane, 403 is the marking plane, on which the sampling points inside the view volume, i.e., the ground plane where the shadow needs to be drawn, can be marked, and 404 is the far clipping plane.
In another embodiment of the present disclosure, the first face of the view volume model may be its back face, i.e., the face away from the camera, and the second face may be its front face, i.e., the face toward the camera. Step S13 may then specifically include the following steps:
Step S131, rendering the first face of the view volume model to obtain the first rendered model, performing a depth test on the first rendered model, and computing the first stencil value.
Specifically, the back face of the view volume model is rendered first and depth-tested: if the depth value of the current pixel is smaller than that in the depth buffer, the depth test passes and the stencil value is unchanged; if it is larger, the depth test fails, the stencil buffer is incremented, and the stencil value increases by 1. Each pixel of the rendered face is depth-tested to obtain its first stencil value in turn; no color needs to be output at this point.
Step S132, rendering the second face of the view volume model to obtain the second rendered model, performing a depth test on the second rendered model, and computing the second stencil value based on the first stencil value.
Specifically, the front face of the view volume model is rendered and depth-tested: if the depth value of the current pixel is smaller than that in the depth buffer, the depth test passes and the stencil value is unchanged; if it is larger, the depth test fails, the stencil buffer is decremented, and the stencil value decreases by 1. Each pixel of the rendered face is depth-tested and the stencil value is updated from the first stencil value to obtain the second stencil value; again, no color needs to be output.
Step S133, marking the pixels located in the projection space of the view volume model as the sampling points according to the second stencil value; as shown in Fig. 4, the marked sampling points lie on the marking plane 403.
If the second stencil value is greater than 0, the pixel lies in the projection space, is marked as a sampling point, and color is output.
Marking the pixels in the projection space with the Shadow Volume algorithm spares the CPU a view-volume culling pass and thus reduces CPU consumption.
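For illustration, the render states of the marking passes described above can be expressed in Unity ShaderLab roughly as follows. This is a minimal sketch assuming Unity's built-in render pipeline; the shader name, queue, and pass structure are hypothetical, and only the stencil and depth states follow the back-face-first variant in the text.

    Shader "Hypothetical/ViewVolumeStencilMark"
    {
        SubShader
        {
            Tags { "Queue" = "Transparent-1" }

            // Pass 1: back faces of the view volume. Where the depth test
            // fails (volume face behind scene geometry), increment the
            // stencil value (the first stencil value). No color is output.
            Pass
            {
                Cull Front
                ZWrite Off
                ColorMask 0
                Stencil { Comp Always  Pass Keep  ZFail IncrSat }
            }

            // Pass 2: front faces. Where the depth test fails, decrement
            // the stencil value, yielding the second stencil value.
            Pass
            {
                Cull Back
                ZWrite Off
                ColorMask 0
                Stencil { Comp Always  Pass Keep  ZFail DecrSat }
            }

            // Pass 3: draw only where the second stencil value is greater
            // than 0, i.e. the marked sampling points on the marking plane.
            // In Unity's stencil syntax, "Ref 0, Comp Less" passes where
            // 0 < stencil buffer value.
            Pass
            {
                Cull Back
                ZWrite Off
                ZTest Always
                Stencil { Ref 0  Comp Less }
                // ... fragment program that outputs the ambient occlusion color
            }
        }
    }

The front-face-first variant described earlier simply swaps the roles of the two marking passes: front faces increment on depth-test pass (Pass IncrSat), back faces decrement on depth-test fail (ZFail DecrSat).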
In step S2, the sampling points are projected into the perspective space through a perspective transformation, and the texture map is computed based on the world coordinates of the sampling points.
Step S20: calculating the world coordinates of the sampling points.
In one embodiment of the present disclosure, calculating the world coordinates of the sampling points includes: rendering the view volume model to obtain the depth values of the sampling points; and calculating the world coordinates of the sampling points from those depth values.
It should be noted that only the depth values of the sampling points are obtained: the test state of the stencil buffer is set so that only the sampling points whose stencil value is greater than 0 are processed, the front face of the view volume model is rendered, and the depth values of the sampling points are obtained from the depth information of step S11.
After the depth values of the sampling points are obtained, their world coordinates are computed from those depth values. The prior art offers many methods of computing world coordinates from depth values, and the present disclosure is not limited in this respect.
For example, the depth buffer (Depth Buffer) may be used to reconstruct world coordinates. Specifically, the depth value is converted into a linear depth value under the perspective view, and the world coordinates of the sampling points are then computed from the linear depth value.
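As one possible illustration of this reconstruction, the following Cg/HLSL sketch recovers a sampling point's world coordinates from the camera depth texture via an inverse view-projection matrix. The property and matrix names are assumptions (the matrix would be uploaded from script), and the depth-range convention depends on the graphics API; this is a sketch of one common technique, not the patent's prescribed implementation.

    sampler2D _CameraDepthTexture;   // camera depth texture provided by Unity
    float4x4  _InvViewProjection;    // assumed: inverse view-projection matrix, set from script

    float3 WorldPosFromDepth(float2 screenUV)
    {
        // Raw (non-linear) depth stored for this pixel.
        float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, screenUV);
        // Rebuild the pixel's clip-space position at that depth.
        float4 clipPos = float4(screenUV * 2.0 - 1.0, rawDepth, 1.0);
        // Undo the view-projection transform and the perspective divide.
        float4 worldPos = mul(_InvViewProjection, clipPos);
        return worldPos.xyz / worldPos.w;
    }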
Step S21: projecting the sampling points into the perspective space through a perspective transformation using a projector component, obtaining a perspective matrix.
In an embodiment of the present disclosure, the projector component may be the worldToProjector matrix of a Unity Projector. The projector component projects the sampling points into the perspective space, which is a two-dimensional coordinate space used to present the target object and the corresponding ambient occlusion effect.
The sampling points are projected from the camera coordinate space into the perspective space with a perspective transformation, and the corresponding transformation matrix is taken as the perspective matrix.
Step S22: computing a projection map from the perspective matrix and the world coordinates of the sampling points.
Specifically, the world coordinates of the sampling points are fed into the perspective matrix corresponding to worldToProjector; through this matrix transformation, the world coordinates of each sampling point are converted into perspective coordinates in the corresponding perspective space, which serve as the projection-map coordinates needed for sampling the texture.
Step S23: sampling a shadow texture, and generating the texture map from the projection map according to the shadow texture.
The shadow texture is used to simulate ambient occlusion (AO) and can be preset in advance, for example as a circle or an ellipse. The shadow texture is sampled, and the texture map is then generated from the projection map.
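Steps S21 to S23 can be sketched in Cg/HLSL as follows, under assumed names: _WorldToProjector stands for the projector's perspective matrix and _ShadowTex for the preset round or elliptical shadow texture. Whether a 0.5 scale and offset is needed to bring the coordinates into the [0, 1] UV range depends on how the projector matrix is constructed.

    float4x4  _WorldToProjector;   // assumed: perspective matrix of the projector component
    sampler2D _ShadowTex;          // assumed: preset shadow (AO) texture, e.g. a soft circle

    float SampleProjectedShadow(float3 worldPos)
    {
        // World coordinates -> projector space (the projection map coordinates).
        float4 projPos = mul(_WorldToProjector, float4(worldPos, 1.0));
        float2 uv = projPos.xy / projPos.w;   // perspective divide
        // Sampled shadow intensity used to build the texture map.
        return tex2D(_ShadowTex, uv).a;
    }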
When the texture map is computed in step S2, only the depth values of the sampling points obtained by rendering the view volume model are computed before the subsequent calculations, so rendering involves only the sampling points inside the view volume; that is, only the 8 vertices of the near and far clipping planes and the 12 triangles formed by the virtual camera, the near clipping plane, the far clipping plane, and the marking plane need to be processed. Moreover, even when the model face count is large and complex, the method provided by the present disclosure still keeps the CPU overhead low.
In step S3, the ambient occlusion rendering value of the sampling points is determined, and the ambient occlusion image corresponding to the target object is rendered according to the ambient occlusion rendering value and the texture map.
In an embodiment of the present disclosure, step S3 specifically includes the following:
Step S31: calculating the height difference of the sampling points from the projection position coordinates and the world coordinates of the sampling points.
Specifically, the height difference is computed from the projectorPos passed in by the projector component, as follows:
HeightOffset = projectorPos.y - worldPos.y    (1)
where HeightOffset is the height difference, projectorPos.y is the y-axis coordinate of the sampling point's projection position in the world coordinate space, and worldPos.y is the y-axis coordinate of the sampling point's world coordinates in the world coordinate space.
Step S32: setting an ambient occlusion fade distance.
Specifically, the ambient occlusion fade distance is the distance at which the ambient occlusion of the target object disappears completely, and it can be set as needed. Taking the underfoot ambient occlusion effect as an example, the fade distance indicates how far from the feet the ambient occlusion (AO) vanishes entirely.
Step S33: calculating the ambient occlusion rendering value from the height difference and the ambient occlusion fade distance.
Specifically, the ambient occlusion rendering value is computed as follows:
AOfade = pow(1 - saturate(HeightOffset / fadeDistance), 2)    (2)
where AOfade is the ambient occlusion rendering value, HeightOffset is the height difference, fadeDistance is the ambient occlusion fade distance, and pow and saturate are built-in Unity Shader functions used in the projector component.
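Transcribed directly into Cg/HLSL, equations (1) and (2) become the two lines below; _FadeDistance is an assumed material property holding the fade distance, while the other names follow the text.

    float HeightOffset = projectorPos.y - worldPos.y;                        // equation (1)
    float AOfade = pow(1.0 - saturate(HeightOffset / _FadeDistance), 2.0);   // equation (2)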
Step S34: rendering the ambient occlusion image corresponding to the target object according to the ambient occlusion rendering value and the texture map.
Specifically, the pixel shader of the projector component draws onto the rasterized two-dimensional screen according to the ambient occlusion rendering value and the texture map, using alpha blending to draw the ambient occlusion image of the target object; the ambient occlusion image lies in the two-dimensional screen space and is displayed on the terminal.
It should be noted that the execution order of step S31 and step S32 is not limited: step S31 may be executed first to calculate the height difference of the sampling points, or step S32 may be executed first to set the ambient occlusion fade distance.
In one embodiment of the present disclosure, an ambient occlusion rendering method includes the following steps: performing depth sampling with a SampleDepth function to obtain the depth values of the sampling points; constructing world coordinates from the depth values of the sampling points to obtain each sampling point's worldPos; passing worldPos through the projector's worldToProjector perspective projection matrix to compute the map UV, i.e., the projection map; sampling the shadow texture to obtain the shadow map shadow; computing AO, i.e., the ambient occlusion rendering value, from the height difference and the ambient occlusion fade distance; and drawing the image with alpha blending to render the ambient occlusion image corresponding to the target object.
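Putting the pieces together, the whole sequence can be sketched as a single fragment shader. This composes the helpers sketched above (WorldPosFromDepth, SampleProjectedShadow), assumes _ProjectorPos and _FadeDistance as illustrative properties, and presumes the pass is configured with Blend SrcAlpha OneMinusSrcAlpha so the result is alpha-blended with the scene; it is one plausible realization, not the patent's literal shader.

    float4 _ProjectorPos;   // assumed: projector position in world space
    float  _FadeDistance;   // assumed: ambient occlusion fade distance

    float4 frag(float2 screenUV : TEXCOORD0) : SV_Target
    {
        // Depth sampling and world-coordinate construction (SampleDepth step).
        float3 worldPos = WorldPosFromDepth(screenUV);

        // Projection map UV plus shadow texture sampling (shadow map "shadow").
        float shadow = SampleProjectedShadow(worldPos);

        // Height-based fade: equations (1) and (2).
        float heightOffset = _ProjectorPos.y - worldPos.y;
        float aoFade = pow(1.0 - saturate(heightOffset / _FadeDistance), 2.0);

        // A dark AO splat whose alpha drives the blend with the scene.
        return float4(0.0, 0.0, 0.0, shadow * aoFade);
    }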
It should be noted that although the various steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
In the technical solutions provided by some embodiments of the present disclosure, the pixels of a target object located in a projection space are marked as sampling points, the sampling points are then projected into a perspective space through a perspective transformation and a texture map is computed, and finally an ambient occlusion image of the target object is drawn according to the texture map. On the one hand, marking the pixels in the projection space as sampling points avoids a CPU-side culling pass over the target object's view volume, preserving the ambient occlusion effect while saving CPU computation, so the method suits the performance and thermal constraints of mobile terminals. On the other hand, when the ambient occlusion image is drawn, only the sampling points are projected into the perspective space for the simulated ambient occlusion computation, so re-rendering all projected models is avoided and a good ambient occlusion effect is preserved even when the model face count is large or complex.
Fig. 5 schematically illustrates the composition of an ambient occlusion rendering apparatus in an exemplary embodiment of the present disclosure. As shown in Fig. 5, the ambient occlusion rendering apparatus 500 includes a marking module 501, a projection module 502, and a drawing module 503, wherein:
the marking module 501 is configured to determine and mark the pixels of a target object located in the projection space as sampling points;
the projection module 502 is configured to project the sampling points into a perspective space through a perspective transformation and determine a texture map based on the world coordinates of the sampling points;
and the drawing module 503 is configured to determine an ambient occlusion rendering value of the sampling points and render an ambient occlusion image corresponding to the target object according to the ambient occlusion rendering value and the texture map.
According to an exemplary embodiment of the present disclosure, the marking module 501 includes a rendering unit, a view volume unit, and a test unit (not shown in the figure). The rendering unit is configured to acquire a rendering model corresponding to the target object located in the camera coordinate space; the view volume unit is configured to compute a view volume model from the rendering model; and the test unit is configured to render the view volume model and perform a depth test, so as to mark the pixels located in the projection space of the view volume model as the sampling points.
According to an exemplary embodiment of the present disclosure, the test unit is configured to render a first face of the view volume model to obtain a first rendered model, perform a depth test on the first rendered model, and compute a first stencil value; render a second face of the view volume model to obtain a second rendered model, perform a depth test on the second rendered model, and compute a second stencil value based on the first stencil value; and mark the pixels located in the projection space of the view volume model as the sampling points according to the second stencil value.
According to an exemplary embodiment of the present disclosure, the projection module 502 further includes a world coordinate unit (not shown in the figure) configured to calculate the world coordinates of the sampling points, including: rendering the view volume model to obtain the depth values of the sampling points; and calculating the world coordinates of the sampling points from those depth values.
According to an exemplary embodiment of the present disclosure, the projection module 502 includes a projection unit, a projection map unit, and a texture map unit (not shown in the figure). The projection unit is configured to project the sampling points into the perspective space through a perspective transformation using a projector component, obtaining a perspective matrix; the projection map unit is configured to compute the projection map from the perspective matrix and the world coordinates of the sampling points; and the texture map unit is configured to sample the shadow texture and generate the texture map from the projection map according to the shadow texture.
According to an exemplary embodiment of the present disclosure, the drawing module 503 includes a parameter unit and a calculation unit (not shown in the figure). The parameter unit is configured to calculate the height difference of the sampling points from the projection position coordinates and the world coordinates of the sampling points, and to set the ambient occlusion fade distance; the calculation unit is configured to calculate the ambient occlusion rendering value from the height difference and the ambient occlusion fade distance.
According to an exemplary embodiment of the present disclosure, the first face of the view volume model comprises a face toward the camera or a face away from the camera.
The specific details of each module in the above ambient occlusion rendering apparatus 500 have been described in detail in the corresponding ambient occlusion rendering method and are therefore not repeated here.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
In an exemplary embodiment of the present disclosure, a storage medium capable of implementing the above method is also provided. Fig. 6 schematically illustrates a computer-readable storage medium in an exemplary embodiment of the present disclosure; as shown in Fig. 6, a program product 600 for implementing the above method according to an embodiment of the present disclosure is depicted, which may employ a portable compact disc read-only memory (CD-ROM) containing program code and may run on a terminal device such as a mobile phone. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided. Fig. 7 schematically shows a structural diagram of a computer system of an electronic device in an exemplary embodiment of the disclosure.
It should be noted that the computer system 700 of the electronic device shown in Fig. 7 is only an example and should not impose any limitation on the functions or the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU)701, which can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for system operation are also stored. The CPU 701, the ROM702, and the RAM 703 are connected to each other via a bus 704. An Input/Output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, the processes described above with reference to the flowcharts may be implemented as computer software programs according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709 and/or installed from the removable medium 711. When executed by the central processing unit (CPU) 701, the computer program performs the various functions defined in the system of the present disclosure.
It should be noted that the computer readable medium shown in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
As another aspect, the present disclosure also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An ambient occlusion rendering method, comprising:
determining pixels of a target object located in a projection space and marking them as sampling points;
projecting the sampling points into a perspective space through a perspective transformation, and determining a texture map based on world coordinates of the sampling points;
and determining an ambient occlusion rendering value of the sampling points, and rendering an ambient occlusion image corresponding to the target object according to the ambient occlusion rendering value and the texture map.
2. The ambient occlusion rendering method of claim 1, wherein the determining and marking of the pixels of the target object located in the projection space as sampling points comprises:
acquiring a rendering model corresponding to the target object located in a camera coordinate space;
computing a view volume model from the rendering model;
and rendering the view volume model and performing a depth test, marking the pixels located in the projection space of the view volume model as the sampling points.
3. The ambient occlusion rendering method of claim 2, wherein the rendering of the view volume model and performing of a depth test to mark the pixels located in the projection space of the view volume model as the sampling points comprises:
rendering a first face of the view volume model to obtain a first rendered model, performing a depth test on the first rendered model, and computing a first stencil value;
rendering a second face of the view volume model to obtain a second rendered model, performing a depth test on the second rendered model, and computing a second stencil value based on the first stencil value;
and marking the pixels located in the projection space of the view volume model as the sampling points according to the second stencil value.
4. The ambient occlusion rendering method of claim 3, wherein the first face of the view volume model comprises a face toward the camera or a face away from the camera.
5. The ambient occlusion rendering method of claim 1, further comprising calculating the world coordinates of the sampling points, including:
rendering the view volume model to obtain depth values of the sampling points;
and calculating the world coordinates of the sampling points from the depth values of the sampling points.
6. The ambient occlusion rendering method of claim 1, wherein the projecting of the sampling points into a perspective space through a perspective transformation and determining of a texture map based on the world coordinates of the sampling points comprises:
projecting the sampling points into the perspective space through a perspective transformation using a projector component, obtaining a perspective matrix;
computing a projection map from the perspective matrix and the world coordinates of the sampling points;
and sampling a shadow texture, and generating the texture map from the projection map according to the shadow texture.
7. The ambient occlusion rendering method of claim 1, wherein determining the ambient occlusion rendering value of the sampling points comprises:
calculating a height difference of the sampling points according to projection position coordinates and the world coordinates of the sampling points;
setting an ambient occlusion fade distance; and
calculating the ambient occlusion rendering value according to the height difference and the ambient occlusion fade distance.
8. An ambient occlusion rendering apparatus, comprising:
a marking module, configured to determine pixel points of a target object located in a projection space and mark them as sampling points;
a projection module, configured to project the sampling points to a perspective space through a perspective transformation, and to determine a texture map based on world coordinates of the sampling points; and
a drawing module, configured to determine an ambient occlusion rendering value of the sampling points, and to render an ambient occlusion image corresponding to the target object according to the ambient occlusion rendering value and the texture map.
9. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the ambient occlusion rendering method of any one of claims 1 to 7.
10. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the ambient occlusion rendering method of any one of claims 1 to 7.
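
Claims 2-4 describe a two-pass depth/stencil test that marks the pixels lying inside the projection space of the visual scene model. The sketch below is a minimal CPU-side emulation of that idea in Python, not the patented implementation: the pre-rendered depth buffers and the raise/lower stencil rule are assumptions, and a real renderer would use hardware depth and stencil tests on the GPU.

import numpy as np

def mark_sampling_points(scene_depth, front_depth, back_depth):
    # Pass 1: "render" the back surface of the volume; where scene
    # geometry lies in front of it, raise the stencil value (the
    # first stencil value of claim 3).
    stencil = np.where(scene_depth < back_depth, 1, 0)
    # Pass 2: "render" the front surface; where scene geometry also
    # lies in front of it, lower the stencil back down (the second
    # stencil value, computed from the first).
    stencil = np.where(scene_depth < front_depth, stencil - 1, stencil)
    # A non-zero stencil means the pixel lies between the two
    # surfaces, i.e. inside the projection space: a sampling point.
    return stencil > 0

# 1x4 toy depth buffers, depths in [0, 1], camera looking along +z:
scene = np.array([[0.20, 0.50, 0.80, 0.95]])
front = np.full_like(scene, 0.40)  # camera-facing surface of the volume
back = np.full_like(scene, 0.90)   # far surface of the volume
print(mark_sampling_points(scene, front, back))  # marks the two middle pixels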
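
Claims 5 and 6 reconstruct each sampling point's world coordinates from its depth value and then project those coordinates into the projector's perspective space to address the texture map. A minimal sketch of that math, assuming OpenGL-style clip space (depth in [0, 1] remapped to [-1, 1]) and hypothetical 4x4 matrices supplied by the caller:

import numpy as np

def world_from_depth(ndc_xy, depth, inv_view_proj):
    # Unproject a pixel's NDC position and depth back to world space.
    clip = np.array([ndc_xy[0], ndc_xy[1], depth * 2.0 - 1.0, 1.0])
    world = inv_view_proj @ clip
    return world[:3] / world[3]  # perspective divide

def projection_uv(world_pos, projector_view_proj):
    # Transform a world position by the projector's perspective matrix
    # and remap NDC [-1, 1] to texture coordinates [0, 1].
    clip = projector_view_proj @ np.append(world_pos, 1.0)
    ndc = clip[:3] / clip[3]
    return ndc[:2] * 0.5 + 0.5

The resulting UV would then be used to sample the shadow texture, from which the projection map is combined into the final texture map.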
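
Claim 7 derives the occlusion strength from the height difference between a sampling point and its projection position, attenuated over a configurable fade distance. One plausible reading, as a sketch, is a linear falloff clamped to [0, 1]; the linear shape is an assumption, since the claim only states that the rendering value is computed from the two quantities.

def ambient_occlusion_value(world_y, projected_y, fade_distance):
    # Occlusion is strongest when the sampling point touches its
    # projection position and fades to zero at the fade distance.
    height_diff = abs(world_y - projected_y)
    ao = 1.0 - height_diff / fade_distance
    return min(max(ao, 0.0), 1.0)

print(ambient_occlusion_value(1.5, 1.0, 2.0))  # 0.75: partly occluded
print(ambient_occlusion_value(4.0, 1.0, 2.0))  # 0.0: beyond the fade distance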
CN202110024576.1A 2021-01-08 2021-01-08 Environment shielding rendering method and device, storage medium and electronic equipment Active CN112734896B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110024576.1A CN112734896B (en) 2021-01-08 2021-01-08 Environment shielding rendering method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110024576.1A CN112734896B (en) 2021-01-08 2021-01-08 Environment shielding rendering method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112734896A 2021-04-30
CN112734896B (en) 2024-04-26

Family

ID=75591413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110024576.1A Active CN112734896B (en) 2021-01-08 2021-01-08 Environment shielding rendering method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112734896B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1347419A2 (en) * 2002-03-21 2003-09-24 Microsoft Corporation Graphics image rendering with radiance self-transfer for low-frequency lighting environments
CN102592305A (en) * 2011-09-06 2012-07-18 浙江大学 Adaptive Screen Space Ambient Occlusion Method
CN103345771A (en) * 2013-06-28 2013-10-09 中国科学技术大学 Efficient image rendering method based on modeling
US20150187129A1 (en) * 2014-01-02 2015-07-02 Nvidia Corporation Technique for pre-computing ambient obscurance
CN104134230A (en) * 2014-01-22 2014-11-05 腾讯科技(深圳)有限公司 Image processing method, image processing device and computer equipment
WO2017206325A1 (en) * 2016-05-30 2017-12-07 网易(杭州)网络有限公司 Calculation method and apparatus for global illumination
CN107452048A (en) * 2016-05-30 2017-12-08 网易(杭州)网络有限公司 The computational methods and device of global illumination
KR20180138458A (en) * 2017-06-21 2018-12-31 에스케이텔레콤 주식회사 Method for processing 3-d data
CN107564089A (en) * 2017-08-10 2018-01-09 腾讯科技(深圳)有限公司 Three dimensional image processing method, device, storage medium and computer equipment
CN107274476A (en) * 2017-08-16 2017-10-20 城市生活(北京)资讯有限公司 The generation method and device of a kind of echo
CN107730578A (en) * 2017-10-18 2018-02-23 广州爱九游信息技术有限公司 The rendering intent of luminous environment masking figure, the method and apparatus for generating design sketch
CN108805971A (en) * 2018-05-28 2018-11-13 中北大学 A kind of ambient light masking methods
CN110152291A (en) * 2018-12-13 2019-08-23 腾讯科技(深圳)有限公司 Rendering method, device, terminal and the storage medium of game picture
CN109993823A (en) * 2019-04-11 2019-07-09 腾讯科技(深圳)有限公司 Shading Rendering method, apparatus, terminal and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
S. Herholz et al.: "Screen Space Spherical Harmonic Occlusion", Vision, Modeling, and Visualization, pages 71-78 *
傲娇的露易丝: "Unity 2018 Shader Graph Study Notes (8): Implementing Water Ripples" (in Chinese), pages 1-9, retrieved from the Internet <URL:https://zhuanlan.zhihu.com/p/36793215?utm_source=ZHShareTargetIDMore> *
Yang Zhicheng: "An Improved Screen-Space Ambient Occlusion (SSAO) Algorithm" (in Chinese), Modern Computer (Professional Edition), no. 08, pages 41-44
Zhao Xingwang: "Research on the Screen-Space Ambient Occlusion Algorithm in Protein Surface Model Rendering" (in Chinese), China Masters' Theses Full-text Database (Electronic Journals), Basic Sciences series *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782612A (en) * 2022-04-29 2022-07-22 北京字跳网络技术有限公司 Image rendering method, device, electronic device and storage medium
WO2023207356A1 (en) * 2022-04-29 2023-11-02 北京字跳网络技术有限公司 Image rendering method and apparatus, device, and storage medium
CN115063517A (en) * 2022-06-07 2022-09-16 网易(杭州)网络有限公司 Flash effect rendering method and device in game, storage medium and electronic equipment
CN116051713A (en) * 2022-08-04 2023-05-02 荣耀终端有限公司 Rendering method, electronic device, and computer-readable storage medium
CN116051713B (en) * 2022-08-04 2023-10-31 荣耀终端有限公司 Rendering method, electronic device and computer-readable storage medium
CN117793442A (en) * 2023-12-29 2024-03-29 深圳市木愚科技有限公司 Image video masking method, device, equipment and medium based on point set
CN117793442B (en) * 2023-12-29 2024-07-09 深圳市木愚科技有限公司 Image video masking method, device, equipment and medium based on point set

Also Published As

Publication number Publication date
CN112734896B (en) 2024-04-26

Similar Documents

Publication Publication Date Title
CN112734896B (en) Environment shielding rendering method and device, storage medium and electronic equipment
CN107358643B (en) Image processing method, image processing device, electronic equipment and storage medium
CN110196746B (en) Interactive interface rendering method and device, electronic equipment and storage medium
CN109448137B (en) Interaction method, interaction device, electronic equipment and storage medium
CN110378947B (en) 3D model reconstruction method and device and electronic equipment
CN111915712B (en) Illumination rendering method and device, computer readable medium and electronic equipment
CN111882631B (en) Model rendering method, device, equipment and storage medium
CN112184873B (en) Fractal graph creation method, fractal graph creation device, electronic equipment and storage medium
CN112891946B (en) Game scene generation method and device, readable storage medium and electronic equipment
CN111798554A (en) Rendering parameter determination method, device, equipment and storage medium
CN114742931A (en) Method and device for rendering image, electronic equipment and storage medium
CN116310036A (en) Scene rendering method, device, equipment, computer readable storage medium and product
CN115937389A (en) Shadow rendering method, device, storage medium and electronic equipment
CN114832375A (en) Ambient light shielding processing method, device and equipment
CN116228956A (en) Shadow rendering method, device, equipment and medium
CN113332714B (en) Light supplementing method and device for game model, storage medium and computer equipment
CN112465941B (en) Volume cloud processing method and device, electronic equipment and storage medium
CN114170368A (en) Method and system for rendering quadrilateral wire frame of model and model rendering equipment
CN110378948B (en) 3D model reconstruction method and device and electronic equipment
CN107452046B (en) Texture processing method, device and equipment of three-dimensional city model and readable medium
CN116310041A (en) Rendering method and device of internal structure effect, electronic equipment and storage medium
CN115970275A (en) Projection processing method and device for virtual object, storage medium and electronic equipment
CN115861503A (en) Rendering method, device and equipment of virtual object and storage medium
CN114693780A (en) Image processing method, device, equipment, storage medium and program product
US20240153159A1 (en) Method, apparatus, electronic device and storage medium for controlling based on extended reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant