
CN115131482B - Method, device and equipment for rendering lighting information in game scenes - Google Patents


Info

Publication number
CN115131482B
CN115131482B
Authority
CN
China
Prior art keywords
information
voxels
illumination
game scene
lighting
Prior art date
Legal status
Active
Application number
CN202210325539.9A
Other languages
Chinese (zh)
Other versions
CN115131482A (en)
Inventor
乔磊
冯星
Current Assignee
Perfect World Beijing Software Technology Development Co Ltd
Original Assignee
Perfect World Beijing Software Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Perfect World Beijing Software Technology Development Co Ltd filed Critical Perfect World Beijing Software Technology Development Co Ltd
Priority to CN202210325539.9A
Publication of CN115131482A
Application granted
Publication of CN115131482B


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/005 - General purpose rendering architectures
    • G06T 15/04 - Texture mapping
    • G06T 15/50 - Lighting effects
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 - Controlling the output signals based on the game progress
    • A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 - Methods for processing data by generating or executing the game program
    • A63F 2300/66 - Methods for processing data by generating or executing the game program for rendering three dimensional images

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Generation (AREA)

Abstract


The present application discloses a method, device and equipment for rendering lighting information in a game scene, relating to the field of 3D rendering technology, capable of adaptively laying out lighting probes, reducing the update, transmission and storage costs of lighting-information sampling in the game scene, and improving the rendering efficiency of the lighting information. The method includes: segmenting a spatial region in the game scene by using a data structure of the spatial region, and extracting the spatial voxels containing objects, the data structure being grid data containing multiple levels; traversing the grid data of the multiple levels, and setting effective lighting probes for the spatial voxels containing objects to obtain a first lighting probe grid of the spatial region; supplementing virtual lighting probes for the spatial voxels that do not contain objects to obtain a second lighting probe grid of the spatial region; and transmitting the lighting information collected by the second lighting probe grid into texture resource information, and rendering the lighting information in the game scene according to the texture resource information.

Description

Method, device and equipment for rendering illumination information in a game scene
Technical Field
The application relates to the technical field of 3D rendering, in particular to a method, a device and equipment for rendering illumination information in a game scene.
Background
With the rise of the game industry, many 3D games need to establish rendering scenes, and the game scene can be visualized by adding physical effects, thereby improving the player's game experience. Because a game scene is a virtual world described by computer technology, similar to the real world, the scene contains light sources and objects, and the light rays emitted by the light sources and the objects undergo reflection or refraction. To improve the realism of a game scene, global illumination is generally used for rendering: through a series of complex algorithms, global illumination calculates the bounces of light rays between object surfaces after the rays are emitted from a light source. Accurate simulation is generally performed at runtime, so the computation cost is high.
In actual development, for the global-illumination effect of dynamic objects, illumination information is generally sampled pixel by pixel using illumination probe collectors distributed in the game scene. An illumination probe, as a position point in space, can store light samples from all directions, thereby forming the illumination information in the game scene. However, sampling illumination information in units of pixels requires relatively accurate illumination probes, and the number of probes is determined by the number of pixel textures in the game scene; a large number of pixel textures requires the GPU hardware to rapidly sample millions of illumination probes per frame. As a result, the cost of updating, transmitting and storing illumination-information samples in the game scene is high, which affects the rendering efficiency of the illumination information.
Disclosure of Invention
In view of the above, the application provides a method, a device and equipment for rendering illumination information in a game scene, mainly aiming to solve the problem in the prior art that the cost of updating, transmitting and storing illumination-information samples in a game scene is high, which affects the rendering efficiency of the illumination information.
According to a first aspect of the present application, there is provided a method for rendering illumination information in a game scene, comprising:
segmenting a space region in a game scene by utilizing a data structure of the space region, and extracting space voxels containing objects, wherein the data structure is grid data containing a plurality of levels;
traversing the grid data of the plurality of levels, and setting effective illumination probes for the space voxels containing objects to obtain a first illumination probe grid of the space region, wherein the first illumination probe grid comprises effective illumination probes;
supplementing virtual illumination probes for the space voxels that do not contain objects to obtain a second illumination probe grid of the space region, wherein the second illumination probe grid comprises effective illumination probes and virtual illumination probes;
transmitting the illumination information acquired by the second illumination probe grid to texture resource information, and rendering the illumination information in the game scene according to the texture resource information.
Further, the data structure is a tree structure formed by the space region. Segmenting the space region and extracting the space voxels containing objects specifically comprises: performing multi-level segmentation on the space region by using the tree structure formed by the space region, to form space voxels of a plurality of levels; in the segmentation process of the multi-level space region, judging whether a space voxel in the segmented level contains an object; and if so, performing next-level segmentation on the space voxel in the segmented level until the number of levels reaches a preset threshold, and extracting the space voxels containing objects.
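The multi-level segmentation described above can be sketched as a recursive octree-style split, where only occupied voxels are subdivided further. The following Python is an illustrative sketch only; the function names, the AABB object representation, and the depth-threshold parameter are assumptions, not the patent's implementation:

```python
def intersects(voxel_min, voxel_size, box_min, box_max):
    """Axis-aligned overlap test between a cubic voxel and an object AABB."""
    return all(voxel_min[i] < box_max[i] and voxel_min[i] + voxel_size > box_min[i]
               for i in range(3))

def subdivide(origin, size, objects, level, max_level, out):
    """Recursively split a cubic voxel; keep only leaf voxels containing objects."""
    hit = [o for o in objects if intersects(origin, size, o[0], o[1])]
    if not hit:
        return                       # empty voxel: stop splitting here
    if level == max_level:           # preset level threshold reached
        out.append((origin, size))   # leaf voxel that contains geometry
        return
    half = size / 2.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                child = (origin[0] + dx * half,
                         origin[1] + dy * half,
                         origin[2] + dz * half)
                subdivide(child, half, hit, level + 1, max_level, out)
```

With an 8-unit region, a unit-cube object at the origin and a depth threshold of 3, only the single unit voxel touching the object survives, so empty space never receives fine voxels.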
Further, setting effective illumination probes for the space voxels containing objects to obtain the first illumination probe grid of the space region specifically comprises: acquiring, for the space voxels containing objects, first grid positions of the space voxels in the grid data of the plurality of levels; and setting effective illumination probes at the vertexes of the first grid positions to obtain the first illumination probe grid of the space region.
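Placing effective probes at voxel vertexes can be sketched as follows; corners shared between neighbouring occupied voxels are de-duplicated so each grid vertex carries at most one probe (a minimal sketch with assumed names and a set-based grid):

```python
def probe_positions(occupied_voxels):
    """Place one effective probe at each corner of every occupied voxel,
    de-duplicating corners shared between neighbouring voxels."""
    probes = set()
    for (ox, oy, oz), size in occupied_voxels:
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    probes.add((ox + dx * size, oy + dy * size, oz + dz * size))
    return probes
```

One isolated voxel yields eight probes; two face-adjacent voxels share four corners and yield twelve, which is why probe count grows with occupied surface rather than with scene volume.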
Further, supplementing virtual illumination probes for the space voxels that do not contain objects to obtain the second illumination probe grid of the space region specifically comprises: traversing the grid data of the plurality of levels, and acquiring, for the space voxels that do not contain objects in each level, second grid positions of the space voxels in the grid data of the levels; judging whether effective illumination probes are arranged at the four vertexes of the grid positions; and if not, supplementing virtual illumination probes at the vertexes of the grid positions where no effective illumination probe is arranged, to obtain the second illumination probe grid of the space region.
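The supplementing step can be sketched in the same style: every corner of an empty voxel that is not already covered by an effective probe becomes a virtual probe. This is an illustrative sketch under the same assumed set-based representation:

```python
def add_virtual_probes(effective_probes, empty_voxels):
    """For each empty voxel, mark every corner that lacks an effective probe
    as a virtual probe, so interpolation across the grid stays seamless."""
    virtual = set()
    for (ox, oy, oz), size in empty_voxels:
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    corner = (ox + dx * size, oy + dy * size, oz + dz * size)
                    if corner not in effective_probes:
                        virtual.add(corner)
    return virtual
```

An empty voxel directly adjacent to an occupied one inherits the four shared effective probes and only adds virtual probes on its far face.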
Further, before the illumination information acquired by the second illumination probe grid is transmitted to texture resource information and the illumination information in the game scene is rendered according to the texture resource information, the method further comprises the steps of extracting a spatial logic relation mapped by the illumination information in the second illumination probe grid and storing the spatial logic relation into the texture resource information.
Further, the rendering of illumination information in a game scene according to the texture resource information specifically comprises the steps of expanding the illumination information in the texture resource information into a three-dimensional block data set according to the spatial logic relationship, and recording hierarchical relationships among the three-dimensional block data sets by utilizing indirect textures; combining the three-dimensional block data sets by utilizing the hierarchical relation among the three-dimensional block data sets in the indirect texture to form three-dimensional block texture information of a tree structure, wherein the three-dimensional block texture information is recorded with the space position of illumination information in a game scene; and according to the space position of the viewpoint position in the game scene, reading illumination information of the corresponding space position from the three-dimensional block texture information of the tree structure to render.
Further, expanding the illumination information in the texture resource information into three-dimensional block data sets according to the spatial logic relation, and recording the hierarchical relation among the three-dimensional block data sets by using indirect textures, specifically comprises: extracting the hierarchical distribution of the effective illumination probes and the virtual illumination probes in the second illumination probe grid according to the spatial logic relation; expanding the illumination information in the texture resource information into three-dimensional block data according to the hierarchical distribution of the effective illumination probes and the virtual illumination probes in the second illumination probe grid; and recording the hierarchical relation among the three-dimensional block data sets by using indirect textures.
Further, according to the space position of the viewpoint position in the game scene, reading illumination information of the corresponding space position from the three-dimensional block texture information of the tree structure for rendering, wherein the method specifically comprises the steps of obtaining indirect textures representing the hierarchical relationship between three-dimensional block data sets from the three-dimensional block texture information of the tree structure according to the space position of the viewpoint position in the game scene; and reading illumination information of corresponding spatial positions in the three-dimensional block texture information of the tree structure to render by utilizing the indirect textures representing the hierarchical relation between the three-dimensional block data sets.
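The viewpoint-driven read described above is a two-step lookup: the indirect texture first resolves a spatial position to a block entry, and the block texture is then sampled at that entry. The following Python sketch uses plain dicts as stand-ins for the indirect texture and the block-texture atlas; `block_size` and the record format are assumptions:

```python
def sample_illumination(point, block_size, indirect, atlas):
    """Resolve a world-space point to its 3D block through the indirect
    table, then fetch the illumination record stored for that block."""
    block = tuple(int(c // block_size) for c in point)
    entry = indirect.get(block)
    if entry is None:
        return None                  # no probe data stored for this region
    return atlas[entry]
```

The indirection is what lets sparse scenes stay cheap: regions with no geometry have no entry and cost nothing beyond the table probe.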
According to a second aspect of the present application, there is provided a method for rendering illumination information in a game scene, comprising:
obtaining a close-range voxel contained in a space voxel in a game scene, wherein the close-range voxel is a voxel which meets a distance condition in voxels of a preset level formed by dividing the space voxel in the game scene, and the distance condition is that a bounding box of the voxel intersects with an object bounding box in the game scene;
creating an effective illumination probe and a virtual illumination probe for the close-range voxels contained in the space voxels, and generating a probe grid of the space voxels, wherein the probe grid is used for capturing illumination information in a game scene;
combining and storing illumination information captured by the probe grid of the space voxel into texture resource information according to the viewpoint position in the game scene;
and responding to a rendering instruction of the illumination information, establishing a rendering task by utilizing the texture resource information, and rendering the illumination information in the game scene.
Further, obtaining the close-range voxels contained in the space voxels in the game scene specifically comprises: obtaining a space region covered by an object mounted in the game scene; segmenting each space voxel in the space region into voxels of a preset level; traversing each voxel in the preset level, and judging whether the bounding box of the segmented voxel intersects with an object bounding box in the game scene; and if so, determining that the segmented voxel is a close-range voxel of the object surface in the game scene.
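The distance condition above reduces to an AABB intersection test over the sub-voxels of one space voxel. A minimal sketch, assuming a cubic space voxel split into an n by n by n grid (names and parameters are illustrative):

```python
def close_range_voxels(origin, size, n, obj_min, obj_max):
    """Split one space voxel into an n*n*n grid of sub-voxels and keep
    those whose bounding box intersects the object AABB
    (the 'distance condition' in the text)."""
    step = size / n
    kept = []
    for i in range(n):
        for j in range(n):
            for k in range(n):
                lo = (origin[0] + i * step,
                      origin[1] + j * step,
                      origin[2] + k * step)
                if all(lo[a] < obj_max[a] and lo[a] + step > obj_min[a]
                       for a in range(3)):
                    kept.append((i, j, k))
    return kept
```

For a 4-unit voxel split 4 ways against a unit-cube object at the corner, only the single sub-voxel touching the object qualifies as close-range.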
Further, creating an effective illumination probe and a virtual illumination probe for the close-range voxels contained in the space voxels, and generating a probe grid of the space voxels, wherein the method specifically comprises the steps of creating the effective illumination probe for the close-range voxels in the hierarchy in the space voxels; and adding a virtual voxel corresponding to the hierarchy for the space voxel, and creating a virtual illumination probe for voxels conforming to the addition condition in the virtual voxels, wherein the virtual illumination probe is used for performing seamless interpolation on illumination data sampled by close-range voxels in the hierarchy in the space voxel.
Further, adding the virtual voxels corresponding to the levels for the space voxels and creating virtual illumination probes for the voxels meeting the addition condition in the virtual voxels specifically comprises: adding, for the levels greater than the first level in the space voxels, virtual voxels corresponding to the levels, wherein the virtual voxels are mapped to the voxels of the corresponding levels in the space voxels; and traversing the virtual voxels corresponding to the levels, judging whether the mapped voxels in the space voxels have effective illumination probes, and if not, creating virtual illumination probes for the virtual voxels.
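The addition condition can be sketched as a coverage check: a coarse-level virtual voxel maps onto a block of finer voxels and only receives a virtual probe when none of them already carries an effective probe. The integer-coordinate mapping and `scale` factor below are assumptions for illustration:

```python
def needs_virtual_probe(virtual_voxel, effective_voxels, scale):
    """A coarse-level virtual voxel maps onto a scale^3 block of finer
    voxels; it receives a virtual probe only when none of the mapped
    voxels already carries an effective probe."""
    vx, vy, vz = virtual_voxel
    for dx in range(scale):
        for dy in range(scale):
            for dz in range(scale):
                if (vx * scale + dx, vy * scale + dy, vz * scale + dz) in effective_voxels:
                    return False     # an effective probe already covers it
    return True
```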
Further, before the illumination information captured by the probe grid of the space voxel is merged and stored into texture resource information according to the viewpoint position in the game scene, the method further comprises: expanding the illumination information captured by the probe grid of the space voxel to a three-dimensional block data set according to the viewpoint position in the game scene to form three-dimensional block texture information of a multi-level tree structure; and recording, in the process of expanding the three-dimensional block data set, the hierarchical relation mapped by the three-dimensional block data set into indirect texture information;
the method comprises the steps of combining and storing illumination information captured by a probe grid of the space voxel into texture resource information according to viewpoint positions in a game scene, and specifically comprises the step of combining and storing the three-dimensional block texture information and the indirect texture information into texture resource information.
Further, expanding the illumination information captured by the probe grid of the space voxel to the three-dimensional block data set according to the viewpoint position in the game scene to form the three-dimensional block texture information of the multi-level tree structure specifically comprises: sampling illumination data in the game scene by using the effective illumination probes in the probe grid, and performing interpolation operation on the illumination data to obtain first illumination information of the viewpoint position in the game scene; performing seamless interpolation, by using the virtual illumination probes in the probe grid, on the illumination data sampled by the close-range voxels in the levels in the space voxel to obtain second illumination information of the viewpoint position in the game scene; and expanding the first illumination information and the second illumination information of the viewpoint position in the game scene to the three-dimensional block data set to form the three-dimensional block texture information of the multi-level tree structure.
Further, establishing the rendering task by utilizing the texture resource information and rendering the illumination information in the game scene specifically comprises: establishing the rendering task by utilizing the texture resource information, and obtaining the hierarchical relation of the three-dimensional block texture information mapping from the indirect texture information; querying position information of the three-dimensional block texture information in the tree structure according to the hierarchical relation of the three-dimensional block texture information mapping; and sampling, according to the position of the three-dimensional block texture information in the tree structure, the illumination information captured by the probe grid of the space voxel from the three-dimensional block texture information, and rendering the illumination information.
Further, querying the position information of the three-dimensional block texture information in the tree structure according to the hierarchical relation of the three-dimensional block texture information mapping specifically comprises: extracting the level and the offset of the three-dimensional block texture information in the tree structure according to the hierarchical relation of the three-dimensional block texture information mapping; and calculating the position information of the three-dimensional block texture information in the tree structure according to the level and the offset of the three-dimensional block texture information in the tree structure.
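The level-and-offset address computation can be illustrated with a simple assumed packing scheme, in which each level owns a base address inside the flat texture; the `level_base` table and `stride` are assumptions, not the patent's exact layout:

```python
def block_position(level, offset, level_base, stride):
    """Locate a block inside the flat texture: the per-level base address
    plus the block's offset times its stride (an assumed packing scheme)."""
    return level_base[level] + offset * stride
```

For example, with bases `[0, 64, 96]` and a stride of 8, the fourth block (offset 3) of level 1 starts at address 88.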
According to a third aspect of the present application, there is provided a rendering device of illumination information in a game scene, comprising:
The extraction unit is used for segmenting the space region by utilizing a data structure of the space region in the game scene, and extracting space voxels containing objects, wherein the data structure is grid data containing a plurality of levels;
The first setting unit is used for traversing the grid data of the multiple layers, setting an effective illumination probe aiming at the space voxels containing the object to obtain a first illumination probe grid of the space region, wherein the first illumination probe grid comprises an effective illumination probe;
The second setting unit is used for complementarily setting a virtual illumination probe for the space voxels which do not contain the object to obtain a second illumination probe grid of the space region, wherein the second illumination probe grid comprises an effective illumination probe and a virtual illumination probe;
The first rendering unit is used for transmitting the illumination information acquired by the second illumination probe grid to texture resource information and rendering the illumination information in the game scene according to the texture resource information.
The data structure is a tree structure formed by the space regions, the extraction unit comprises a segmentation module, a first judgment module and an extraction module, wherein the segmentation module is used for carrying out multi-level segmentation on the space regions by utilizing the tree structure formed by the space regions to form a plurality of levels of space voxels, the first judgment module is used for judging whether the space voxels in the segmented levels contain objects in the segmentation process of the multi-level space regions, and the extraction module is used for carrying out next-level segmentation on the space voxels in the segmented levels until the number of the levels reaches a preset threshold value and extracting the space voxels containing the objects if the space voxels in the segmented levels contain the objects.
Further, the first setting unit comprises a first acquisition module and a first setting module, wherein the first acquisition module is used for acquiring grid positions of spatial voxels containing objects in grid data of a plurality of levels, and the first setting module is used for setting effective illumination probes at vertexes in the first grid positions to obtain a first illumination probe grid of a spatial region.
Further, the second setting unit comprises a second obtaining module, a second judging module and a second setting module, wherein the second obtaining module is used for traversing the grid data of the multiple layers, obtaining a second grid position of the spatial voxel in the grid data of the multiple layers according to the spatial voxel of the object which is not contained in each layer, the second judging module is used for judging whether valid illumination probes are arranged at four vertexes of the grid position, and the second setting module is used for complementarily setting virtual illumination probes according to vertexes of the grid position, where the valid illumination probes are not arranged, to obtain a second illumination probe grid of the spatial region.
The device further comprises an extraction unit, a processing unit and a processing unit, wherein the extraction unit is used for extracting a spatial logic relation mapped by illumination information in the second illumination probe grid before the illumination information acquired by the second illumination probe grid is transmitted to texture resource information and the illumination information in a game scene is rendered according to the texture resource information, and storing the spatial logic relation into the texture resource information.
The first rendering unit further comprises an unfolding module, a merging module and a reading module, wherein the unfolding module is used for unfolding illumination information in the texture resource information into a three-dimensional block data set according to the spatial logic relation, and recording the hierarchical relation among the three-dimensional block data sets by using indirect textures, the merging module is used for merging the three-dimensional block data sets by using the hierarchical relation among the three-dimensional block data sets in the indirect textures to form three-dimensional block texture information of a tree structure, the spatial position of the illumination information in a game scene is recorded in the three-dimensional block texture information, and the reading module is used for reading the illumination information of the corresponding spatial position from the three-dimensional block texture information of the tree structure according to the spatial position of the viewpoint position in the game scene for rendering.
The unfolding module is specifically configured to extract a hierarchical distribution of the effective illumination probes and the virtual illumination probes in the second illumination probe grid according to the spatial logic relationship, and is specifically configured to unfold illumination information in the texture resource information into three-dimensional block data according to the hierarchical distribution of the effective illumination probes and the virtual illumination probes in the second illumination probe grid, and record a hierarchical relationship between three-dimensional block data sets by using indirect textures.
Further, the reading module is specifically configured to obtain, according to a spatial position of the viewpoint position in the game scene, an indirect texture that characterizes a hierarchical relationship between three-dimensional block datasets from three-dimensional block texture information of the tree structure; the reading module is specifically further configured to read illumination information of a corresponding spatial position in the three-dimensional block texture information of the tree structure by using the indirect texture representing the hierarchical relationship between the three-dimensional block data sets to render.
According to a fourth aspect of the present application, there is provided a rendering device of illumination information in a game scene, comprising:
The acquisition unit is used for acquiring a close-range voxel contained in a space voxel in the game scene, wherein the close-range voxel is a voxel which meets a distance condition in voxels of a preset level formed by the division of the space voxel in the game scene, and the distance condition is that a bounding box of the voxel intersects with an object bounding box in the game scene;
the creating unit is used for creating an effective illumination probe and a virtual illumination probe for the close-range voxels contained in the space voxels, and generating a probe grid of the space voxels, wherein the probe grid is used for capturing illumination information in a game scene;
The storage unit is used for merging and storing illumination information captured by the probe grid of the space voxel into texture resource information according to the viewpoint position in the game scene;
And the second rendering unit is used for responding to the rendering instruction of the illumination information, establishing a rendering task by utilizing the texture resource information and rendering the illumination information in the game scene.
The acquisition unit further comprises a segmentation module used for acquiring a space region covered by an object mounted in the game scene and segmenting each space voxel in the space region into voxels of a preset level, a third judgment module used for traversing each voxel in the preset level and judging whether the bounding box of the segmented voxel intersects with an object bounding box in the game scene, and a determination module used for determining, if so, that the segmented voxel is a close-range voxel of the object surface in the game scene.
The creating unit further comprises a creating module used for creating an effective illumination probe for the close-range voxels in the hierarchy in the space voxels, and an adding module used for adding virtual voxels corresponding to the hierarchy in the space voxels, creating a virtual illumination probe for the voxels conforming to the adding conditions in the virtual voxels, and the virtual illumination probe is used for carrying out seamless interpolation on illumination data sampled by the close-range voxels in the hierarchy in the space voxels.
The adding module further comprises an adding submodule, a judging submodule and a creating submodule, wherein the adding submodule is used for adding, for the levels greater than the first level in the space voxels, virtual voxels corresponding to the levels, the virtual voxels being mapped to the voxels of the corresponding levels in the space voxels; the judging submodule is used for traversing the virtual voxels corresponding to the levels and judging whether the mapped voxels in the space voxels have effective illumination probes; and the creating submodule is used for creating the virtual illumination probes for the virtual voxels if the mapped voxels in the space voxels have no effective illumination probes.
The device further comprises an unfolding unit used for unfolding the illumination information captured by the probe grid of the space voxel to a three-dimensional block data set according to the viewpoint position in the game scene before the illumination information captured by the probe grid of the space voxel is merged and stored in texture resource information according to the viewpoint position in the game scene, a recording unit used for recording the hierarchical relation mapped by the three-dimensional block data set to indirect texture information in the process of unfolding the three-dimensional block data set, and a storage unit used for merging and storing the three-dimensional block texture information and the indirect texture information to texture resource information.
The unfolding unit further comprises an operation module for sampling illumination data in the game scene by using the effective illumination probes in the probe grid and carrying out interpolation operation on the illumination data to obtain first illumination information of the viewpoint positions in the game scene, an interpolation module for carrying out seamless interpolation on the illumination data sampled by the close-range voxels in the space voxels in the hierarchy by using the virtual illumination probes in the probe grid to obtain second illumination information of the viewpoint positions in the game scene, and an unfolding module for unfolding the first illumination information and the second illumination information of the viewpoint positions in the game scene to a three-dimensional block data set to form three-dimensional block texture information of a multi-level tree structure.
Further, the second rendering unit comprises an acquisition module for establishing a rendering task by utilizing the texture resource information and acquiring the hierarchical relation of the three-dimensional block texture information mapping from the indirect texture information, a query module for querying position information of the three-dimensional block texture information in the tree structure according to the hierarchical relation of the three-dimensional block texture information mapping, and a sampling module for sampling, according to the position of the three-dimensional block texture information in the tree structure, the illumination information captured by the probe grid of the space voxel from the three-dimensional block texture information and rendering the illumination information.
The query module further comprises an extraction submodule for extracting the level and the offset of the three-dimensional block texture information in the tree structure according to the level relation of the three-dimensional block texture information mapping, and a calculation submodule for calculating the position information of the three-dimensional block texture information in the tree structure according to the level and the offset of the three-dimensional block texture information in the tree structure.
According to a fifth aspect of the present application there is provided a computer device comprising a memory storing a computer program and a processor implementing the steps of the method of the first aspect described above when the computer program is executed by the processor.
According to a sixth aspect of the present application there is provided a readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the method of the first aspect described above.
In a game scene, lighting is one of the most influential factors and an indispensable part of the visual style. Under normal circumstances, static and dynamic objects in a game scene may have very different sizes or complex model structures, and such art assets are difficult to bake into an effective lightmap. Illumination probes sampled per pixel avoid the incongruity between the lighting of moving objects and the rest of a scene lit by static lightmaps, bring unified indirect illumination to the scene, simplify the rendering pipeline, and improve rendering efficiency. When illumination probes operate, illumination information is sampled at the position of one probe, then sampled at the positions of adjacent probes, and the sampled values are interpolated to compute the illumination at a position between the probes.
One way for illumination probes to collect illumination information is to sample per object in the game scene, either as per-object parameter sampling or per-object 3D texture sampling. In per-object parameter sampling, the illumination variation depends on the object's surface normals, which must be used in the resolve stage, so illumination mismatches and discontinuities with adjacent large models may occur; collection, update, and sampling are performed one-to-one on the CPU, and the interpolated SH (Spherical Harmonics) coefficients for each relevant object are relatively easy to maintain at low cost. In per-object 3D texture sampling, hardware-accelerated sampling and interpolation are performed on the GPU, which improves the sampling of illumination information to some extent compared with per-object parameter sampling.
Another way for illumination probes to collect illumination information is to sample per pixel in the game scene. Because screen pixels are fixed, replacing object-granular collection with pixel-granular collection yields more accurate probe results, and when linear interpolation between multiple illumination probes is performed in a pixel or compute shader, GPU hardware can rapidly sample the illumination captured by millions of probes per frame.
By means of the above technical solution, compared with the existing per-object sampling of illumination information, the method, device, and equipment for rendering illumination information in a game scene provided by the present application use per-pixel sampling, which can provide indirect illumination for a large number of complex objects, suits various illumination-collection scenes, and avoids both the cost of updating 3D textures from the CPU to the GPU and the limit that transmission bandwidth places on the total number of per-object-sampled objects. Compared with the existing per-pixel sampling, the close-range voxels contained in the space voxels of the game scene are obtained; close-range voxels are voxels near object surfaces, which require a larger number of illumination probes, so effective illumination probes and virtual illumination probes are created for them and a probe grid of the space voxels is generated. The probes can thus be distributed adaptively, with the virtual probes assisting the effective probes to provide seamless interpolation for the game scene. The illumination information captured by the probe grid of the space voxels is merged and stored into texture resource information according to the viewpoint position in the game scene, so that the texture resource information can serve the illumination sampling of different game scenes and pixels on the same screen can sample interpolation results of different densities according to their spatial positions. Further, in response to a rendering instruction of the illumination information, a rendering task is established by using the texture resource information to render the illumination information in the game scene, which greatly increases the number of usable illumination probes while reducing the transmission and sampling costs of illumination information and improving image processing efficiency.
The foregoing description is only an overview of the technical solutions of the present application. In order that the technical means of the present application may be understood more clearly and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features, and advantages of the present application more readily apparent, specific embodiments are described below.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
Fig. 1 is a flowchart of a method for rendering illumination information in a game scene according to an embodiment of the present application;
Figs. 2a-2c are schematic diagrams of a process of creating illumination probes in a game scene according to an embodiment of the present application;
Fig. 3 is a flowchart of another method for rendering illumination information in a game scene according to an embodiment of the present application;
Fig. 4 is a flowchart of yet another method for rendering illumination information in a game scene according to an embodiment of the present application;
Figs. 5a-5b are schematic diagrams of a rendering process of illumination information in a game scene according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a device for rendering illumination information in a game scene according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of another device for rendering illumination information in a game scene according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of yet another device for rendering illumination information in a game scene according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of still another device for rendering illumination information in a game scene according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The present disclosure will now be discussed with reference to several exemplary embodiments. It should be understood that these embodiments are discussed only to enable those of ordinary skill in the art to better understand and thus practice the teachings of the present invention, and are not meant to imply any limitation on the scope of the invention.
As used herein, the term "comprising" and its variants are to be interpreted as the open-ended term "including but not limited to". The term "based on" is to be interpreted as "based at least in part on". The terms "one embodiment" and "an embodiment" are to be interpreted as "at least one embodiment". The term "another embodiment" is to be interpreted as "at least one other embodiment".
This embodiment provides a method for rendering illumination information in a game scene. As shown in Fig. 1, the method is applied to a client of a scene rendering tool and comprises the following steps:
101. Segment the space region by using a data structure of the space region in the game scene, and extract the space voxels that contain an object.
102. Traverse the grid data of the multiple levels, and set effective illumination probes for the space voxels containing an object, to obtain a first illumination probe grid of the space region.
103. Complementarily set virtual illumination probes for the space voxels that do not contain an object, to obtain a second illumination probe grid of the space region.
104. Transmit the illumination information collected by the second illumination probe grid to texture resource information, and render the illumination information in the game scene according to the texture resource information.
In the method for rendering illumination information in a game scene provided by this embodiment, the space region is segmented by using a data structure of the space region in the game scene, and the space voxels containing objects are extracted. The data structure comprises grid data of multiple levels, each level containing a corresponding number of space voxels, and the grid data can express the relative spatial relationship between the space voxels and the objects in the space region, so that the spatial positions with strong illumination variation can be obtained. Effective illumination probes are set for the space voxels containing objects to obtain a first illumination probe grid, so that probes can be placed adaptively at positions with strong illumination variation. The grid data of the levels are then traversed and virtual illumination probes are additionally set for the space voxels that do not contain objects, yielding a second illumination probe grid of the space region, so that the space voxels of different levels can be interpolated and transitioned effectively during illumination sampling, and positions with uneven probe distribution in the game scene can still obtain effective illumination sampling results. The illumination information collected by the second illumination probe grid is transmitted to texture resource information, and the illumination information in the game scene is rendered according to the texture resource information, so that full illumination detail is obtained without baking the entire scene, which enriches the rendered illumination while reducing the rendering cost.
The data structure of the space region in the embodiments of the present invention may be a tree structure built over the space region, such as a quadtree, an octree, or a 16-ary tree. Specifically, during segmentation, the space region is split into multiple levels by using the tree structure to form space voxels of multiple levels. During the segmentation of each level, it is judged whether a segmented space voxel of that level contains an object; if so, the space voxel is split into the next level, until the number of levels reaches a preset threshold, and the space voxels containing objects are extracted, where containing an object means the space voxel is close to the surface of an object.
It will be appreciated that the segmentation of the space region starts from one large voxel, which serves as the space region from which segmentation originally proceeds. Its size may be set from the level of the minimum space voxel, for example with a minimum space voxel of 1 x 1 and 3 levels the space region is 8 x 8, and it may also be set in a custom manner. Before the large voxel is split, it is judged whether the space region contains an object; if so, the large voxel is split, otherwise it is not. For each sub-voxel formed after splitting, it is likewise judged whether it contains an object; if so, the sub-voxel is split further, otherwise it is not. Each round of splitting forms the space voxels of one level, and the voxel size keeps halving as the level increases: the space voxel of the first level, i.e. the initial large voxel, is the largest, and the space voxels of the highest level, i.e. those formed by the final split, are the smallest. Finally, grid data of multiple levels are formed, but the level distribution is non-uniform, because only the space region around objects is split. For example, for an 8 x 8 space region with an object in the upper-left corner, the first split forms 4 x 4 space voxels, i.e. the space voxels of the first level; among them, only those containing the object are split into the space voxels of the second level, i.e. 2 x 2 voxels, while the other 3 first-level voxels are not split. Similarly, among the second-level voxels only those containing the object are split into the third level, and if all 4 second-level voxels contain the object, each of them is split into 4 space voxels of size 1 x 1.
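The hierarchical, object-guided segmentation described above can be sketched as follows. This is a minimal sketch of the 3D (octree) variant, assuming a cubic region and a point-like object test; the names (`subdivide`, `contains_object`) are illustrative rather than taken from the application.

```python
def subdivide(origin, size, level, max_level, contains_object, leaves):
    """Recursively split a cubic region; record the smallest voxels
    that contain (part of) an object. Voxels without objects are
    never split further, so refinement happens only near surfaces."""
    if not contains_object(origin, size):
        return
    if level == max_level or size == 1:
        leaves.append((origin, size))
        return
    half = size // 2
    for dz in (0, half):
        for dy in (0, half):
            for dx in (0, half):
                child = (origin[0] + dx, origin[1] + dy, origin[2] + dz)
                subdivide(child, half, level + 1, max_level,
                          contains_object, leaves)

# Example: a point-like object at (1, 1, 1) inside an 8 x 8 x 8 region,
# split down to 3 levels (voxel size 1).
def contains_object(origin, size):
    return all(o <= 1 < o + size for o in origin)

leaves = []
subdivide((0, 0, 0), 8, 0, 3, contains_object, leaves)
```

Only the chain of voxels around the object is refined, so `leaves` ends up holding the single 1 x 1 x 1 close-range voxel at (1, 1, 1), mirroring how the level distribution of the grid data stays non-uniform.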
Because the level distribution in the grid data is non-uniform, in order to distribute illumination probes of different densities in the game scene, the grid data of the multiple levels may be traversed: for each space voxel containing an object, its grid position in the multi-level grid data is obtained, and effective illumination probes are set at the four vertices of that grid position, yielding the first illumination probe grid of the space region. Since space voxels lie at different levels, repeated vertices may appear, for example a vertex position shared between the first level and the second or third level during segmentation. Therefore the grid positions of the space voxels in the multi-level grid data are obtained, it is judged whether effective illumination probes already exist at the four vertices of each grid position, and if not, virtual illumination probes are additionally set at those vertices, yielding the second illumination probe grid of the space region, so that the grid data of the levels are filled by effective and virtual illumination probes. The purpose of the virtual illumination probes is to allow subsequent illumination sampling to be interpolated seamlessly, improving the illumination sampling process.
Taking a 2D quadtree as a specific application scene, as shown in Figs. 2a-2c: first, as shown in Fig. 2a, the space region forms the space voxel of the first level; the first-level voxel is then split into 4 second-level voxels, and for each second-level voxel it is judged whether it contains an object; if so, it is split further until the third level is reached. Next, illumination probes (effective illumination probes) are set adaptively on the voxels: as shown in Fig. 2b, with solid circles denoting effective probes, effective probes are placed at the four corners of every voxel that contains an object, from the first level down to the third; where probes of different levels fall on the same position, only one effective probe is kept. Finally, as shown in Fig. 2c, with hollow circles denoting virtual probes, virtual illumination probes are complementarily placed at the remaining grid vertices that carry no effective probe, such as the hanging vertices created where a fine voxel's corner lies on the edge of a coarser neighbouring voxel, so that every grid vertex of the probe grid carries a probe.
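The 2D placement rule above can be sketched as follows, under the assumption that a virtual probe fills each hanging edge midpoint of a coarse voxel that does not already carry an effective probe; `place_probes` and the voxel encoding `((x, y), size)` are illustrative, not from the application.

```python
def corners(voxel):
    """The four corner vertices of a square voxel ((x, y), size)."""
    (x, y), s = voxel
    return {(x, y), (x + s, y), (x, y + s), (x + s, y + s)}

def place_probes(object_voxels):
    """Effective probes at every corner of every object-containing voxel;
    virtual probes at coarse-voxel edge midpoints lacking an effective probe."""
    valid = set()
    for v in object_voxels:
        valid |= corners(v)
    virtual = set()
    for (x, y), s in object_voxels:
        if s <= 1:
            continue  # smallest voxels create no hanging midpoints
        for mid in ((x + s // 2, y), (x + s // 2, y + s),
                    (x, y + s // 2), (x + s, y + s // 2)):
            if mid not in valid:
                virtual.add(mid)
    return valid, virtual

# Example: a coarse 2 x 2 voxel and a fine 1 x 1 neighbour on its right edge.
valid, virtual = place_probes([((0, 0), 2), ((2, 0), 1)])
```

In the example, the fine voxel's corner (2, 1) already carries an effective probe, so virtual probes fill only the three remaining midpoints of the coarse voxel's edges, matching the "one probe per grid vertex" outcome of Fig. 2c.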
Further, because the collection of illumination information is affected by the probe positions in the illumination probe grid, and illumination information at different spatial positions also affects one another, the spatial logical relationship mapped by the illumination information in the second illumination probe grid is extracted and stored in the texture resource information before the illumination information is rendered. Then, during rendering, the illumination information in the texture resource information is unfolded into three-dimensional block data sets according to the spatial logical relationship, the hierarchical relationship among the block data sets is recorded by using indirect textures, and that hierarchical relationship is further used to form three-dimensional block texture information with a tree structure, which records the spatial position of the illumination information in the game scene. According to the spatial position of the viewpoint position in the game scene, where the viewpoint position corresponds to the spatial position of the viewing device's view angle, that position serves as the illumination collection position, and the illumination information of the corresponding spatial position is read from the tree-structured block texture information for rendering. Because the spatial relationship is stored in advance, large-scale storage and transmission of raw data are avoided, saving cost while guaranteeing rendering accuracy.
Further, considering the distribution of the effective and virtual illumination probes in the second illumination probe grid, in the process of unfolding the illumination information in the texture resource information into three-dimensional block data sets according to the spatial logical relationship and recording the hierarchical relationship between the block data sets with indirect textures, the level distribution of the effective and virtual probes in the second illumination probe grid is first extracted according to the spatial logical relationship; the illumination information is then unfolded into three-dimensional block data according to that level distribution, and the hierarchical relationship between the block data sets is recorded by using indirect textures. Because an indirect texture is not direct texture information, a large amount of texture data need not be transmitted when the texture resource information is transferred, which improves the transmission and sampling efficiency of illumination information in subsequent rendering.
Further, in order to accurately obtain the actual illumination information in the game world, in the process of reading and rendering the illumination information of the corresponding spatial position from the tree-structured three-dimensional block texture information according to the spatial position of the viewpoint position in the game scene, the indirect texture representing the hierarchical relationship between the block data sets is first obtained from the tree-structured block texture information according to that spatial position, and the illumination information of the corresponding spatial position in the block texture information is then read and rendered by using that indirect texture. Because the indirect texture stores the level and the offset of the three-dimensional block data in the tree structure, the illumination information collected by the effective illumination probes can be obtained by sampling the corresponding level and offset in the indirect texture according to the world-space position of the screen pixel.
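The level-and-offset lookup above can be sketched as follows: an indirection table maps a grid cell to the `(level, offset)` of a block in the pooled texture, and the block's start address is reconstructed from a per-level base address plus the offset times the block size. All names, the linear addressing, and the sizes here are illustrative assumptions, not the application's actual texture layout.

```python
BLOCK = 2  # probes per block edge, so BLOCK ** 3 probes per block

def block_address(level, offset, level_base):
    """Linear start address of a block inside the pooled texture."""
    return level_base[level] + offset * BLOCK ** 3

def sample_block(cell, indirection, level_base, pooled):
    """Fetch the probe data of the block covering a grid cell by
    reading (level, offset) from the indirection table."""
    level, offset = indirection[cell]
    start = block_address(level, offset, level_base)
    return pooled[start:start + BLOCK ** 3]

# Example: two levels; level 0 holds two blocks, level 1 starts at address 16.
level_base = {0: 0, 1: 16}
pooled = list(range(32))           # stand-in for the pooled 3D texture
indirection = {(0, 0, 0): (1, 1)}  # cell -> (level, offset)
data = sample_block((0, 0, 0), indirection, level_base, pooled)
```

Because only the small indirection entry travels with the texture resource information, a screen pixel can resolve its block with two reads: one into the indirection table, one into the pooled texture.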
This embodiment provides another method for rendering illumination information in a game scene. As shown in Fig. 3, the method is applied to a client of a scene rendering tool and includes the following steps:
201. Acquire the close-range voxels contained in the space voxels in the game scene.
A game is a real-time, dynamic, and interactive computer simulation. Many three-dimensional games use triangle meshes to express object surfaces and store the detail levels of those surfaces in textures. When shading, the objects intersected by a ray must be considered first, and then the mip levels of the textures corresponding to the surface detail are selected for shading computation. A space voxel serves as the volume unit for rendering textures in the game space, and an object contained in a space voxel can be represented by volume rendering or by extracting a polygonal isosurface at a given threshold.
For the illumination effects of game scenes, global illumination is generally used for rendering. Global illumination accounts for direct light and as much diffuse reflection as possible, so the final shadowing effect is closer to the real world. Specifically, global illumination computes the reflection of light around the game scene and is responsible for the metallic reflections, atmosphere, and gloss that realize many fine shading effects in the environment. In the existing global illumination approach, all indirect illumination is precomputed and stored in the texture information of a lightmap, which gives the game scene an effect similar to global illumination. For the global illumination of non-static objects, illumination probes can simulate the effect of the lightmap: in a precomputation stage before runtime, they sample the illumination at designated points in 3D space and encode and pack the collected information with spherical harmonics. While the game is running, shader programs can quickly reconstruct the illumination effect from these encodings. Like a lightmap, an illumination probe stores illumination information of the scene, but a lightmap stores the light arriving at object surfaces, whereas an illumination probe stores the light passing through empty space.
The space voxels in the game scene correspond to three-dimensional spatial units in the game world. The close-range voxels are the voxels, among the voxels of the preset levels formed by segmenting the space voxels, that satisfy a distance condition, i.e. the voxels close to object surfaces. The distance condition, which serves as the basis for judging whether a voxel is close to an object surface, may be that the bounding box of the voxel intersects the bounding box of an object in the game scene; during segmentation, both the voxels to be split and the voxels already split can be judged against this condition, and if the bounding box of a voxel intersects the bounding box of an object in the game scene, the voxel is close to the object surface, i.e. it is a close-range voxel.
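The distance condition above reduces to an axis-aligned bounding-box (AABB) overlap test. The sketch below assumes boxes encoded as `(min_corner, max_corner)` pairs; the function names are illustrative.

```python
def aabb_intersects(a, b):
    """AABBs overlap iff their intervals overlap on every axis."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def is_close_range(voxel_box, object_boxes):
    """A voxel is a close-range voxel when it touches any object's box."""
    return any(aabb_intersects(voxel_box, box) for box in object_boxes)

# Example: a unit voxel near one object and far from another.
voxel = ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
near = is_close_range(voxel, [((0.5, 0.5, 0.5), (2.0, 2.0, 2.0))])
far = is_close_range(voxel, [((2.0, 2.0, 2.0), (3.0, 3.0, 3.0))])
```

Running this test during segmentation is cheap, so each candidate voxel can be classified on the fly while the hierarchy is being built.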
In the embodiments of the present application, illumination probes in the game world are usually placed manually, and their positions are adjusted according to the actual illumination rendering results, which wastes considerable time and space, and the probes cannot effectively cover the object surfaces. By acquiring the close-range voxels contained in the space voxels in the game scene, the regions near object surfaces can be located in the game world, and illumination probes can be set adaptively for those regions, so that the illumination information most affected by the environment is collected and the sampling efficiency of illumination information is improved.
The execution body of this embodiment may be a device or equipment for rendering illumination information in a game scene, which may be configured at the client of a scene rendering tool. After the game scene is laid out, the positions of the illumination probes in the game scene need to be arranged. Because illumination probes cannot be mounted directly on game objects, they generally depend on a designated space region in the game scene. When illumination probes are added to the game scene, the close-range voxels contained in the space voxels of the designated region can be acquired, and their positions in the game scene used as the preferred positions for placing probes, so that illumination information better matching the needs of the game scene is collected and the illumination rendering effect is improved.
202. Create effective illumination probes and virtual illumination probes for the close-range voxels contained in the space voxels, and generate the probe grid of the space voxels.
The effective illumination probes are illumination collectors placed at sampling points with real positions in the game world: each captures light from all directions at its sampling point and encodes the color information of the captured light into a set of coefficients that can be evaluated quickly while the game runs. Because the space voxels contain close-range voxels of preset levels, and in order to guarantee the interpolation of illumination sampled by effective probes placed between different levels, the virtual illumination probes act as added collectors that assist the effective probes when the GPU samples, enabling effective interpolation and transition, so that seamless interpolation is provided for sampling between larger-level and smaller-level voxels in the game scene.
In the embodiments of the present application, the probe grid of the space voxels corresponds to a grid with a tree structure, where the tree structure reflects the preset levels formed by voxel segmentation; for example, an octree structure corresponds to a preset level count of 3, i.e. a space voxel is split into 8 voxels, and the corners of each voxel form a 2 x 2 x 2 probe grid. In order to distribute probes of different densities in the game world while adaptively minimizing the total number created, the close-range voxels are recorded during segmentation; they yield a better illumination sampling effect, so effective illumination probes are created on the close-range voxels.
It can be appreciated that although effective illumination probes can provide illumination information with an interpolation effect, they may still occupy more than a gigabyte of resources. In order to cope with game scenes of various scales and types while reducing resource occupation, virtual illumination probes are added between space voxels of different levels; the virtual probes associate the effective probes of each level, and sampling interpolation results of different densities can be obtained from the effective probes in different space voxels of the game world, reducing the resource overhead of the illumination collection process.
203. Merge and store the illumination information captured by the probe grid of the space voxels into texture resource information according to the viewpoint position in the game scene.
The viewpoint position in the game scene is any sampling position of illumination information in the game scene. The effective illumination probes continuously collect the changing illumination data in the game scene according to the viewpoint position; then, during the interpolation of the illumination data, the virtual illumination probes provide sampling interpolation results of different densities for the data collected by the effective probes, so that the illumination information captured by the probe grid is formed and merged into the texture resource information.
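The interpolation step above can be sketched as a trilinear blend of the values captured by the eight probes at the corners of one probe-grid cell, where a virtual probe simply contributes a value derived from coarser effective probes. The cell layout and names are illustrative assumptions.

```python
def lerp(a, b, t):
    """Linear interpolation between two probe values."""
    return a + (b - a) * t

def trilinear(c, fx, fy, fz):
    """c[z][y][x] holds the 8 corner probe values of one cell;
    fx, fy, fz in [0, 1] are the viewpoint's local coordinates."""
    x00 = lerp(c[0][0][0], c[0][0][1], fx)
    x10 = lerp(c[0][1][0], c[0][1][1], fx)
    x01 = lerp(c[1][0][0], c[1][0][1], fx)
    x11 = lerp(c[1][1][0], c[1][1][1], fx)
    return lerp(lerp(x00, x10, fy), lerp(x01, x11, fy), fz)

# Example: probe values 0.0 on the z = 0 face and 1.0 on the z = 1 face,
# sampled at the cell center.
cell = [[[0.0, 0.0], [0.0, 0.0]], [[1.0, 1.0], [1.0, 1.0]]]
mid = trilinear(cell, 0.5, 0.5, 0.5)
```

On the GPU this blend is what hardware-filtered 3D texture fetches perform per channel; the sketch makes the arithmetic explicit for a single scalar value.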
It will be appreciated that the illumination information captured by the probe grid of the spatial voxels comprises pre-computed illumination data, so most of the overhead is incurred at bake (build) time rather than at run time. The illumination probes store the illumination data across the scene space and integrate it into texture resource information, which corresponds to the illumination map of a dynamic object; it likewise comprises the direct light sources projected onto object surfaces within the scene and the indirect light sources reflected between different objects, while the surface information and relief information of an object are described by the shaders on the object's material.
204. And responding to a rendering instruction of the illumination information, establishing a rendering task by utilizing the texture resource information, and rendering the illumination information in the game scene.
Because the texture resource information allows the illumination information in the game scene to be used accurately and currently, a rendering task is then established using the texture resource information, and the illumination information in the game scene is rendered. It will be appreciated that, in the same manner, a rendering task is created for each frame of scene data in the game scene, and the scene space in the game scene is submitted to the rendering queue once by the scene rendering tool; each renderable scene space carries, among other things, its own mesh, material, bounding box, and its transformation matrix in the game scene.
Compared with the prior art in which illumination information is sampled object by object, the method for rendering illumination information in a game scene provided by the embodiment of the application can provide indirect illumination for a large number of complex objects, is suitable for various illumination-acquisition scenes, and avoids the limitation that the update cost and CPU-to-GPU transmission bandwidth of 3D textures place on the total number of objects sampled object by object. Compared with the prior art in which illumination information is sampled pixel by pixel, the close-range voxels contained in the spatial voxels of the game scene are obtained; since close-range voxels are the voxels near object surfaces, which require a larger number of illumination probes, effective illumination probes and virtual illumination probes are created for the close-range voxels contained in the spatial voxels, and the probe grid of the spatial voxels is generated. In this way the effective and virtual illumination probes can be distributed adaptively, with the virtual illumination probes assisting the effective illumination probes to provide seamless interpolation for the game scene. The illumination information captured by the probe grid of the spatial voxels is then merged and stored into texture resource information according to the viewpoint position in the game scene; this texture resource information can cope with illumination-information sampling of different game scenes, so that on-screen pixels can sample interpolation results of different surface densities according to their different spatial positions. Further, when a rendering instruction for the illumination information is received, a rendering task is established using the texture resource information, the three-dimensional block texture data of the game scene is cached, and the illumination information in the game scene is rendered, so that only the illumination data required by the current viewpoint is transmitted to the GPU, which reduces resource overhead and greatly improves image-processing efficiency.
Further, as a refinement and extension of the specific implementation manner of the foregoing embodiment, in order to fully describe the specific implementation process of the present embodiment, the present embodiment provides another method for rendering illumination information in a game scene, as shown in fig. 4, where the method includes:
301. And acquiring a spatial region covered by an object to be hooked in the game scene, and dividing each spatial voxel in the spatial region into voxels of a preset level.
The game scene consists of objects, some of which are solid, e.g. a brick, and some of which have no fixed shape, e.g. a strand of smoke, but all of which occupy the volume of three-dimensional space, and which may be opaque, i.e. light cannot pass through the object, or transparent, i.e. light can pass through the object. When an opaque object is rendered, only the surface of the opaque object needs to be considered, the internal part of the object does not need to be known, and when a transparent or semitransparent object is rendered, the behaviors such as reflection, refraction, scattering, absorption and the like caused when light passes through the object need to be considered, and the internal structure and attribute knowledge of the object need to be combined.
The light in the game scene can control the activity of the character, influence the mood of the player and influence the mode of sensing various events, and the game engine can be used as a tool for game development, so that various effects which are observed in real time in the process of game production, including intensity, color, shadow and the like, can be flexibly adjusted in the process of production. In general, for static objects in a game scene, global illumination may be used to bake an illumination map, and when an illumination map is baked, objects in the game scene calculate a result of the map based on the effect of light, and the result of the map is superimposed on objects in the game scene to create a lighting effect, where the illumination map may include direct light sources projected onto the surfaces of the objects in the scene, and indirect light sources reflected between different objects, and surface information and concave-convex information of the objects may be described by a shader on the material of the objects. Although the illumination map of the static object cannot change the illumination condition of the game scene when the game is executed, the pre-calculated real-time global illumination system can calculate complex scene light source interaction in real time, and the game environment with rich global illumination reflection can be established and the change of the light source can be reflected in real time through pre-calculating the global illumination. 
For dynamic objects in a game scene, sampling points of illumination probes can be arranged in a designated area to collect the light and shade information of that area; the designated area can be the spatial region covered by the object to be hooked in the game scene. Because little illumination information is generated where illumination changes little in the game scene, arranging too many illumination probes there is wasteful; dense illumination probes are instead preferentially arranged at positions of illumination change, shadow positions, and illumination-transition areas. Corresponding preferred positions can be selected within the spatial region covered by the object to be hooked, and each spatial voxel in the spatial region is divided into voxels of a preset level; the preset level is the depth limit of the tree structure, which can be set according to the actual application scene, and the higher the number of levels, the more illumination probes are required.
302. And traversing each voxel in the preset hierarchy, and judging whether the bounding box of the segmented voxel intersects with an object bounding box in the game scene.
It can be understood that, in order to arrange more illumination probes at the spatial voxels close to an object surface and fewer at open spatial voxels, a distance-condition determination is performed for each voxel in the preset hierarchy. The bounding box of a segmented voxel is equivalent to the smallest hexahedron that encloses the segmented voxel and is parallel to the coordinate axes, and the object bounding box in the game scene is equivalent to the smallest hexahedron that encloses the object and is parallel to the coordinate axes; a bounding box has a simple structure and a small storage footprint, although it is not well suited to complex virtual environments containing soft deformation. By determining whether the bounding box of the segmented voxel intersects with an object bounding box in the game scene, it is determined whether an object exists in the spatial voxel; a spatial voxel with no corresponding object need not be segmented, and no illumination probes need be arranged for it.
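The distance condition above reduces to a standard axis-aligned bounding-box (AABB) overlap test: two boxes intersect exactly when their extents overlap on every axis. A minimal sketch (the type and function names are illustrative assumptions):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class AABB:
    """Axis-aligned bounding box: the smallest coordinate-axis-parallel hexahedron."""
    min_pt: Tuple[float, float, float]
    max_pt: Tuple[float, float, float]

def aabb_intersects(a: AABB, b: AABB) -> bool:
    """Two AABBs intersect iff their extents overlap on all three axes."""
    return all(a.min_pt[i] <= b.max_pt[i] and b.min_pt[i] <= a.max_pt[i]
               for i in range(3))
```

With `<=`, boxes that merely touch on a face still count as intersecting, which is the conservative choice here: a voxel grazing an object surface should still be treated as close-range.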
303. If yes, determining the segmented voxels as close-range voxels of the object surface in the game scene.
The above determination may be performed before and after each split of the spatial voxels: voxels that satisfy the determination condition are split repeatedly, while voxels that do not satisfy it are not split further. First, a large voxel is taken as the original spatial voxel to be divided and is split uniformly; the splitting principle is that a voxel close to the object surface is split, and each sub-voxel is split repeatedly until the specified minimum voxel size, i.e. the preset level, is reached. This process generates a tree structure of the preset level.
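The split-and-recurse procedure above can be sketched as follows (a hypothetical illustration; the `near_surface` predicate stands in for the bounding-box intersection test of step 302):

```python
def subdivide(voxel_min, size, depth, max_depth, near_surface, out):
    """Split a voxel recursively: only voxels satisfying the distance condition
    (`near_surface`) are split further; leaves at `max_depth` are recorded in `out`."""
    if not near_surface(voxel_min, size):
        return  # open space: no further split, no probes here
    if depth == max_depth:
        out.append((voxel_min, size))  # close-range leaf voxel
        return
    half = size / 2
    for dx in (0, 1):  # uniform 2 x 2 x 2 (octree) split
        for dy in (0, 1):
            for dz in (0, 1):
                child = (voxel_min[0] + dx * half,
                         voxel_min[1] + dy * half,
                         voxel_min[2] + dz * half)
                subdivide(child, half, depth + 1, max_depth, near_surface, out)
```

With a toy "surface" at x = 0.5 inside the unit cube and a depth limit of 2, only the thin slab of voxels straddling the surface survives to the leaf level, which is precisely the adaptive behaviour the embodiment describes.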
304. And creating an effective illumination probe and a virtual illumination probe for the short-distance voxels contained in the space voxels, and generating a probe grid of the space voxels.
In the embodiment of the application, the effective illumination probes are the illumination probes arranged in the game scene, generally close to an object surface so as to acquire the illumination brightness of that surface. Specifically, effective illumination probes can be created for the close-range voxels within each level of the spatial voxels, virtual voxels of the corresponding levels are added for the spatial voxels, and virtual illumination probes are created for the voxels among the virtual voxels that satisfy the addition condition; the virtual illumination probes perform seamless interpolation on the illumination data sampled by the close-range voxels within the levels of the spatial voxels.
Specifically, in the process of adding virtual voxels of a corresponding level for the spatial voxels and creating virtual illumination probes for the voxels among the virtual voxels that satisfy the addition condition: for each level of the spatial voxels greater than the first level, virtual voxels of the corresponding level are added, each virtual voxel being mapped to the voxel of the corresponding level in the spatial voxels; the virtual voxels of the level are then traversed, and it is judged whether an effective illumination probe exists at the mapped voxel in the spatial voxels; if not, a virtual illumination probe is created for the virtual voxel. The virtual voxels and virtual illumination probes can be displayed in different colors for voxels of different levels. Because virtual illumination probes are added separately for each level, they end up surrounding the close-range voxels where effective illumination probes were created, filling the places where no effective illumination probe exists.
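The per-level judgment reduces to a set-difference over probe positions; a minimal sketch under that assumption (positions are abstract grid tuples, not the patent's actual data layout):

```python
def add_virtual_probes(effective_probes, level_corners):
    """Create a virtual probe at every corner position of a level's voxel grid
    where no effective probe already exists, so that sampling between coarse and
    fine levels can interpolate seamlessly."""
    return {corner for corner in level_corners if corner not in effective_probes}
```

For example, if a coarse level's grid has four corners but effective probes were only created at the two corners near the surface, virtual probes are created at exactly the remaining two.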
305. And expanding illumination information captured by the probe grid of the space voxel to a three-dimensional block data set according to the viewpoint position in the game scene to form three-dimensional block texture information of a multi-level tree structure.
In the embodiment of the application, the effective illumination probes in the probe grid can be used to sample illumination data in the game scene, and an interpolation operation is performed on the illumination data to obtain first illumination information for the viewpoint position in the game scene; the virtual illumination probes in the probe grid perform seamless interpolation on the illumination data sampled by the close-range voxels within the levels of the spatial voxels to obtain second illumination information for the viewpoint position. The first and second illumination information for the viewpoint position in the game scene are then expanded into a three-dimensional block data set to form the three-dimensional block texture information of the multi-level tree structure.
It should be noted that the three-dimensional block texture information of the tree structure combines the illumination information captured by the effective and virtual illumination probes, and can be transmitted to the GPU to form the texture resource information.
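One common way to realize such block ("brick") texture storage is to pack the bricks of every level into a single 3-D atlas texture and address them by a linear index; the layout below is an illustrative assumption, not taken from the embodiment:

```python
def brick_atlas_offset(brick_index, bricks_per_axis, brick_size):
    """Convert a linear brick index into the texel offset of that brick inside a
    3-D atlas texture laid out as bricks_per_axis^3 bricks of brick_size^3 texels."""
    bx = brick_index % bricks_per_axis
    by = (brick_index // bricks_per_axis) % bricks_per_axis
    bz = brick_index // (bricks_per_axis ** 2)
    return (bx * brick_size, by * brick_size, bz * brick_size)
```

Because every brick has the same texel footprint, a shader can recover a brick's location from its index alone, which is what makes the indirect addressing described below possible.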
306. And in the process of expanding the three-dimensional block data set, recording the hierarchical relation mapped by the three-dimensional block data set into indirect texture information.
It should be noted that, in order to complete the storage of the three-dimensional block data set, it is also necessary to construct indirect texture information containing a level-by-level expanded representation built from the tree structure. At run time, the indirect texture information is sampled, and the content obtained is the hierarchical relationship of the three-dimensional block data within the tree structure, from which the sampling position of the cached illumination probe within the three-dimensional block data is calculated.
307. And merging the three-dimensional block texture information and the indirect texture information and storing the merged three-dimensional block texture information and the indirect texture information into texture resource information.
It will be appreciated that the texture resource information stores the illumination information of every spatial voxel in the game scene; interpolation is then performed between the illumination information baked into the nearest illumination probes, and the illumination information at any position within a spatial voxel is estimated and projected onto the moving object.
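The interpolation between the eight nearest baked probes at a voxel's corners is a standard trilinear blend; a self-contained sketch (scalar values stand in for a probe's illumination coefficients):

```python
def trilerp(corner_values, fx, fy, fz):
    """Trilinearly interpolate 8 probe values at the corners of a voxel.
    corner_values[i][j][k] holds the value at corner (i, j, k); fx, fy, fz are
    the fractional position of the sample point inside the voxel, each in [0, 1]."""
    def lerp(a, b, t):
        return a + (b - a) * t
    # Blend along x on each of the 4 edges, then along y, then along z.
    c00 = lerp(corner_values[0][0][0], corner_values[1][0][0], fx)
    c10 = lerp(corner_values[0][1][0], corner_values[1][1][0], fx)
    c01 = lerp(corner_values[0][0][1], corner_values[1][0][1], fx)
    c11 = lerp(corner_values[0][1][1], corner_values[1][1][1], fx)
    c0 = lerp(c00, c10, fy)
    c1 = lerp(c01, c11, fy)
    return lerp(c0, c1, fz)
```

In practice the same blend is applied per spherical-harmonic coefficient or per color channel; a single scalar is used here only to keep the sketch short.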
308. And responding to a rendering instruction of the illumination information, establishing a rendering task by utilizing the texture resource information, and rendering the illumination information in the game scene.
In the embodiment of the application, a rendering task can be established by utilizing texture resource information, a hierarchical relation of three-dimensional block texture information mapping is obtained from indirect texture information, then the position information of the three-dimensional texture information in a tree structure is queried according to the hierarchical relation of the three-dimensional block texture information mapping, finally the illumination information captured by a probe grid of a spatial voxel is sampled from the three-dimensional block texture information according to the position of the three-dimensional texture information in the tree structure, and the illumination information is rendered.
Specifically, in the process of inquiring the position information of the three-dimensional texture information in the tree structure according to the hierarchical relation of the three-dimensional block texture information mapping, the hierarchy and the offset of the three-dimensional block texture information in the tree structure can be extracted according to the hierarchical relation of the three-dimensional block texture information mapping, and the position information of the three-dimensional texture information in the tree structure can be calculated according to the hierarchy and the offset of the three-dimensional block texture information in the tree structure.
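A common encoding for such per-texel hierarchy records packs the level and the offset into one integer; the bit split below is an illustrative assumption, not the patent's actual format:

```python
def encode_indirect(level: int, offset: int) -> int:
    """Pack a tree level (high 4 bits) and a linear brick offset (low 28 bits)
    into one 32-bit indirect-texture entry."""
    return (level << 28) | offset

def decode_indirect(entry: int):
    """Unpack an indirect-texture entry back into (level, brick_offset)."""
    level = (entry >> 28) & 0xF
    offset = entry & ((1 << 28) - 1)
    return level, offset
```

During rendering, the decoded level selects the tree depth and the offset locates the brick within that level's block data, from which the probe sampling position is computed.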
Taking a 2D quadtree as a specific application scene as an example: first, effective illumination probes are adaptively created in the spatial voxels of the game scene, with one effective illumination probe placed at each voxel corner. Specifically, each spatial voxel is divided into 4 voxels, and each voxel is then divided repeatedly, forming the spatial voxels into a quadtree structure whose voxel corners form 2×2 probe grids; the voxels close to the object surface are recorded, and effective illumination probes are placed on those voxel nodes. Next, virtual voxels and virtual illumination probes are added at the corresponding levels of the spatial voxels in the game scene, realizing seamless interpolation of illumination information when sampling between voxels of a larger level and voxels of a smaller level. The illumination information captured by the effective and virtual illumination probes is then merged and stored into texture resource information. The texture resource information can be expanded level by level during rendering; each piece of three-dimensional block data in each level's texture resource information has an independent number, each level's three-dimensional block data generally combines the illumination information captured by the effective and virtual illumination probes, and merge processing can continue for three-dimensional block data that has not yet been merged and stored. Since the expanded three-dimensional block data set is a storage structure composed of a number of textures with the same block-size layout, the storage structure allows indirect addressing during rendering, so that after the three-dimensional block data has been cached, it can be fetched from the cache. The whole rendering process is shown in figs. 5a-5b: the tree structure is used to construct indirect textures that are expanded level by level; the indirect textures are sampled during rendering, and the content obtained is the level and offset of the cached three-dimensional block data in the tree structure, from which the sampling position for the illumination probe in the block cache can be calculated.
According to the embodiment of the application, the illumination probe sets formed from spatial voxels with hierarchical relations in a tree structure are expanded into a three-dimensional block data set, and the hierarchical relation of each piece of three-dimensional block data in the tree structure is stored in an indirect mapping texture. Then, during game-scene rendering, each frame of scene data updates the illumination information captured by the illumination probes in the tree structure according to the viewpoint position and expands it into the three-dimensional block data set; the hierarchical relation is recorded into the indirect texture during expansion and then transmitted to the GPU. When sampling, the corresponding hierarchical relation in the tree structure is obtained from the indirect texture according to the world position of the sampling point on the object surface, and finally the illumination information formed by sampling the final illumination probe in the three-dimensional block data is used to render the illumination information.
Further, as a specific implementation of the method of fig. 1, an embodiment of the present application provides a device for rendering illumination information in a game scene, as shown in fig. 6, where the device includes an extracting unit 41, a first setting unit 42, a second setting unit 43, and a first rendering unit 44.
The extracting unit 41 may be configured to segment a spatial region in a game scene by using a data structure of the spatial region, and extract spatial voxels containing objects, where the data structure is grid data containing multiple levels;
The first setting unit 42 may be configured to traverse the grid data of the multiple levels, set an effective illumination probe for the spatial voxel containing the object, and obtain a first illumination probe grid of the spatial region, where the first illumination probe grid includes an effective illumination probe;
A second setting unit 43, configured to set up a virtual illumination probe for a spatial voxel that does not include an object, to obtain a second illumination probe grid of the spatial region, where the second illumination probe grid includes an effective illumination probe and a virtual illumination probe;
the first rendering unit 44 may be configured to transmit the illumination information collected by the second illumination probe grid to texture resource information, and render the illumination information in the game scene according to the texture resource information.
In a specific application scenario, as shown in fig. 7, the data structure is a tree structure formed by the spatial regions, and the extracting unit 41 includes:
the segmentation module 411 may be configured to perform multi-level segmentation on the spatial region by using a tree structure formed by the spatial region, so as to form spatial voxels of multiple levels;
The first determining module 412 may be configured to determine, during the splitting process of the multi-level spatial region, whether the spatial voxels in the split level include an object;
the extracting module 413 may be configured to, if so, perform a next-level segmentation on the spatial voxels in the segmented level until the number of levels reaches a preset threshold, and extract the spatial voxels including the object.
In a specific application scenario, as shown in fig. 7, the first setting unit 42 includes:
A first obtaining module 421, configured to obtain, for a spatial voxel including an object, a grid position where the spatial voxel is located in grid data of multiple levels;
The first setting module 422 may be configured to set an effective illumination probe at a vertex in the first grid position, to obtain a first illumination probe grid of the spatial region.
In a specific application scenario, as shown in fig. 7, the second setting unit 43 includes:
A second obtaining module 431, configured to traverse the grid data of the multiple levels, and obtain, for a spatial voxel that does not include an object in each level, a second grid position where the spatial voxel is located in the grid data of the multiple levels;
A second judging module 432, configured to judge whether valid illumination probes are set at four vertices of the grid position;
And the second setting module 433 may be configured, if not, to set a virtual illumination probe at each vertex of the grid position where no effective illumination probe is set, to obtain a second illumination probe grid of the spatial region.
In a specific application scenario, as shown in fig. 7, the apparatus further includes:
The extracting unit 45 may be configured to extract a spatial logic relationship mapped by the illumination information in the second illumination probe grid before transmitting the illumination information acquired by the second illumination probe grid to texture resource information and rendering the illumination information in the game scene according to the texture resource information, and store the spatial logic relationship in the texture resource information.
In a specific application scenario, as shown in fig. 7, the first rendering unit 44 includes:
the expansion module 441 may be configured to expand illumination information in the texture resource information into a three-dimensional block data set according to the spatial logical relationship, and record a hierarchical relationship between the three-dimensional block data sets by using an indirect texture;
the merging module 442 may be configured to merge the three-dimensional block data sets by using a hierarchical relationship between the three-dimensional block data sets in the indirect texture to form three-dimensional block texture information of a tree structure, where a spatial position of the illumination information in the game scene is recorded in the three-dimensional block texture information;
the reading module 443 may be configured to read, according to a spatial position of the viewpoint position in the game scene, illumination information of a corresponding spatial position from the three-dimensional block texture information of the tree structure for rendering.
In a specific application scenario, the unfolding module 441 may be specifically configured to extract, according to the spatial logic relationship, a hierarchical distribution of the effective illumination probe and the virtual illumination probe in the second illumination probe grid;
the expansion module 441 may be further specifically configured to expand the illumination information in the texture resource information into the three-dimensional block data according to the hierarchical distribution of the effective illumination probes and the virtual illumination probes in the second illumination probe grid, and record the hierarchical relationship between the three-dimensional block data sets by using indirect textures.
In a specific application scenario, the reading module 443 may be specifically configured to obtain, according to a spatial position of the viewpoint position in the game scenario, an indirect texture that characterizes a hierarchical relationship between three-dimensional block datasets from three-dimensional block texture information of the tree structure;
The reading module 443 may be further specifically configured to use the indirect texture that characterizes the hierarchical relationship between the three-dimensional block data sets to read illumination information of a corresponding spatial position in the three-dimensional block texture information of the tree structure for rendering.
It should be noted that, other corresponding descriptions of each functional unit related to the rendering device of illumination information in a game scene provided in this embodiment may refer to corresponding descriptions in fig. 1, and are not described herein again.
Further, as a specific implementation of the methods of fig. 3 and fig. 4, an embodiment of the present application provides a device for rendering illumination information in a game scene, as shown in fig. 8, where the device includes an obtaining unit 51, a creating unit 52, a storage unit 53, and a second rendering unit 54.
The obtaining unit 51 may be configured to obtain a near-distance voxel included in a spatial voxel in the game scene, where the near-distance voxel is a voxel that meets a distance condition in voxels of a preset level formed by spatial voxel segmentation in the game scene, where the distance condition is that a bounding box of the voxel intersects with an object bounding box in the game scene;
a creating unit 52, configured to create an effective illumination probe and a virtual illumination probe for a short-distance voxel contained in the spatial voxel, and generate a probe grid of the spatial voxel, where the probe grid is used to capture illumination information in a game scene;
The storage unit 53 may be configured to combine and store illumination information captured by the probe grid of the spatial voxel into texture resource information according to a viewpoint position in the game scene;
the second rendering unit 54 may be configured to establish a rendering task using the texture resource information in response to a rendering instruction of the illumination information, and render the illumination information in the game scene.
Compared with the prior art in which illumination information is sampled object by object, the device for rendering illumination information in a game scene provided by the embodiment of the application can provide indirect illumination for a large number of complex objects, is suitable for various illumination-acquisition scenes, and avoids the limitation that the update cost and CPU-to-GPU transmission bandwidth of 3D textures place on the total number of objects sampled object by object. Compared with the prior art in which illumination information is sampled pixel by pixel, the close-range voxels contained in the spatial voxels of the game scene are obtained; since close-range voxels are the voxels near object surfaces, which require a larger number of illumination probes, effective illumination probes and virtual illumination probes are created for the close-range voxels contained in the spatial voxels, and the probe grid of the spatial voxels is generated. In this way the effective and virtual illumination probes can be distributed adaptively, with the virtual illumination probes assisting the effective illumination probes to provide seamless interpolation for the game scene. The illumination information captured by the probe grid of the spatial voxels is then merged and stored into texture resource information according to the viewpoint position in the game scene; this texture resource information can cope with illumination-information sampling of different game scenes, so that on-screen pixels can sample interpolation results of different surface densities according to their different spatial positions. Further, when a rendering instruction for the illumination information is received, a rendering task is established using the texture resource information, the three-dimensional block texture data of the game scene is cached, and the illumination information in the game scene is rendered, so that only the illumination data required by the current viewpoint is transmitted to the GPU, which reduces resource overhead and greatly improves image-processing efficiency.
In a specific application scenario, as shown in fig. 9, the obtaining unit 51 includes:
the segmentation module 511 may be configured to obtain a spatial region covered by an object to be hooked in the game scene, and segment each spatial voxel in the spatial region into voxels of a preset level;
a third judging module 512, configured to traverse each voxel in the preset hierarchy, and judge that the bounding box of the segmented voxel intersects with the object bounding box in the game scene;
a determining module 513 may be configured to determine, if yes, that the segmented voxel is a near voxel of the object surface in the game scene.
In a specific application scenario, as shown in fig. 9, the creating unit 52 includes:
a creation module 521, configured to create an effective illumination probe for a close-range voxel within the hierarchy of the spatial voxels;
The adding module 522 may be configured to add a virtual voxel corresponding to a hierarchy for the spatial voxel, and create a virtual illumination probe for a voxel conforming to an addition condition in the virtual voxel, where the virtual illumination probe is configured to perform seamless interpolation on illumination data sampled by a voxel at a close distance in the hierarchy in the spatial voxel.
In a specific application scenario, as shown in fig. 9, the adding module 522 includes:
an adding submodule 5221, configured to add, for the levels of the spatial voxels above the first level, virtual voxels corresponding to those levels, wherein the virtual voxels are mapped to the voxels of the corresponding levels in the spatial voxels;
a judging submodule 5222, configured to traverse the virtual voxels corresponding to a level and judge whether a valid illumination probe exists in the mapped voxel of the spatial voxels;
a creating submodule 5223, configured to create, if not, a virtual illumination probe for the virtual voxel.
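The logic of submodules 5221-5223 can be sketched as a per-level complement: every spatial voxel of a level above the first gets a mapped virtual voxel, and a virtual probe is created exactly where no effective probe exists. This is an illustrative sketch under assumed data shapes (voxel ids as integers, `(2**level)**3` voxels per level); `create_virtual_probes` is not a name from the patent.

```python
def create_virtual_probes(levels):
    """levels: dict mapping level -> set of voxel ids carrying an effective
    illumination probe. Returns level -> set of voxel ids that receive a
    virtual probe instead (levels above the first only)."""
    virtual = {}
    for level, effective_ids in levels.items():
        if level <= 1:
            # Only levels above the first get mapped virtual voxels.
            continue
        n = (2 ** level) ** 3  # one virtual voxel per spatial voxel of the level
        virtual[level] = {vid for vid in range(n) if vid not in effective_ids}
    return virtual
```

The virtual probes do not sample the scene themselves; they exist so that interpolation near level boundaries always has a full set of grid corners to read from.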
In a specific application scenario, as shown in fig. 9, the apparatus further includes:
The expanding unit 55 may be configured to expand, before the illumination information captured by the probe grid of the spatial voxels is merged and stored into texture resource information, that illumination information into a three-dimensional block data set according to the viewpoint position in the game scene, so as to form three-dimensional block texture information of a multi-level tree structure;
A recording unit 56, configured to record, in the process of expanding into the three-dimensional block data set, a hierarchical relationship mapped by the three-dimensional block data set into indirect texture information;
The storage unit 53 may be further configured to combine the three-dimensional block texture information and the indirect texture information and store the combined three-dimensional block texture information and the indirect texture information into texture resource information.
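The pairing of a block data set with indirect texture information, as units 55, 56 and 53 describe, can be sketched as packing per-level blocks into one flat array while recording each block's location in an indirection table. This is an illustrative sketch; the flat-list layout and the name `expand_to_block_set` are assumptions, not the patented texture format.

```python
def expand_to_block_set(lighting_by_level):
    """lighting_by_level: dict level -> list of lighting blocks (any payload).
    Returns (block_data, indirection) where indirection[(level, i)] is the
    offset of block i of that level inside the merged block_data."""
    block_data, indirection = [], {}
    for level in sorted(lighting_by_level):
        for i, block in enumerate(lighting_by_level[level]):
            # Record the hierarchical mapping while expanding, so a sampler
            # can later find any block via (level, index) alone.
            indirection[(level, i)] = len(block_data)
            block_data.append(block)
    return block_data, indirection
```

At render time the indirection table plays the role of the indirect texture: a lookup by level and index yields the offset into the merged texture resource, with no need to traverse the tree on the GPU.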
In a specific application scenario, as shown in fig. 9, the expanding unit 55 includes:
The operation module 551 may be configured to sample illumination data in the game scene by using an effective illumination probe in the probe grid, and perform interpolation operation on the illumination data to obtain first illumination information of a viewpoint position in the game scene;
The interpolation module 552 can be used for performing seamless interpolation on illumination data sampled by the close-range voxels in the hierarchy in the space voxels by using the virtual illumination probe in the probe grid to obtain second illumination information of the viewpoint position in the game scene;
The expansion module 553 can be used for expanding the first illumination information and the second illumination information of the viewpoint position in the game scene to a three-dimensional block data set to form three-dimensional block texture information of a multi-level tree structure.
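The interpolation that modules 551 and 552 perform can be illustrated with a plain trilinear blend of the eight probes surrounding a sample point. This is a generic sketch, not the patented operator: an effective probe contributes its sampled lighting value, while a virtual probe at a corner contributes a value filled in from a coarser level, which is what makes the result seamless across level boundaries.

```python
def trilinear(corner_values, fx, fy, fz):
    """corner_values[z][y][x] holds the 8 probe values of one grid cell;
    fx, fy, fz in [0, 1] are the sample's fractional position in the cell."""
    def lerp(a, b, t):
        return a + (b - a) * t
    # Interpolate along x on each of the four cell edges...
    c00 = lerp(corner_values[0][0][0], corner_values[0][0][1], fx)
    c10 = lerp(corner_values[0][1][0], corner_values[0][1][1], fx)
    c01 = lerp(corner_values[1][0][0], corner_values[1][0][1], fx)
    c11 = lerp(corner_values[1][1][0], corner_values[1][1][1], fx)
    # ...then along y, then along z.
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz)
```

In practice each probe value would be a spherical-harmonic or color vector rather than a scalar, but the blend weights are the same per component.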
In a specific application scenario, as shown in fig. 9, the second rendering unit 54 includes:
The obtaining module 541 may be configured to establish a rendering task by using the texture resource information, and obtain a hierarchical relationship mapped by three-dimensional block texture information from the indirect texture information;
a query module 542, configured to query position information of the three-dimensional texture information in a tree structure according to a hierarchical relationship mapped by the three-dimensional block texture information;
the sampling module 543 may be configured to sample, according to a position of the three-dimensional texture information in the tree structure, illumination information captured by a probe grid of the spatial voxel from the three-dimensional block texture information, and render the illumination information.
In a specific application scenario, as shown in fig. 9, the query module 542 includes:
The extraction submodule 5421 is configured to extract the level and the offset of the three-dimensional block texture information in the tree structure according to the hierarchical relationship mapped by the three-dimensional block texture information;
The calculating submodule 5422 may be used to calculate the position information of the three-dimensional texture information in the tree structure according to the level and the offset of the three-dimensional block texture information in the tree structure.
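The position computation of submodules 5421-5422 can be sketched for a level-ordered layout in which level l holds (2**l)**3 blocks: the level fixes a base index, the offset is added to it, and the offset also decodes into a cell coordinate within the level. The layout and both function names are assumptions for illustration, not the patented addressing scheme.

```python
def level_base(level):
    """Index of the first block of `level` in a level-ordered layout."""
    return sum((2 ** l) ** 3 for l in range(1, level))

def position_in_tree(level, offset):
    """Return the block's flat index in the merged texture, plus its
    (x, y, z) cell coordinate within its level of the tree."""
    n = 2 ** level
    flat = level_base(level) + offset
    x, rem = offset % n, offset // n
    y, z = rem % n, rem // n
    return flat, (x, y, z)
```

With this decoding, the sampling module only needs the (level, offset) pair stored in the indirect texture to address the captured lighting inside the three-dimensional block texture.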
It should be noted that, other corresponding descriptions of each functional unit related to the rendering device for illumination information in a game scene provided in this embodiment may refer to corresponding descriptions in fig. 1-2, and are not described herein again.
Based on the method shown in fig. 1, correspondingly, the embodiment of the application also provides a storage medium, on which a computer program is stored, which when being executed by a processor, implements the method for rendering illumination information in the game scene shown in fig. 1.
Based on the method shown in fig. 3-4, correspondingly, the embodiment of the application also provides a storage medium, on which a computer program is stored, which when executed by a processor, implements the method for rendering illumination information in the game scene shown in fig. 3-4.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.), and includes several instructions for causing a computer device (may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective implementation scenario of the present application.
Based on the method shown in fig. 1 and fig. 3 to fig. 4 and the virtual device embodiment shown in fig. 6 to fig. 9, in order to achieve the above objective, the embodiment of the present application further provides an entity device for rendering illumination information in a game scene, which may specifically be a computer, a smart phone, a tablet computer, a smart watch, a server, or a network device, where the entity device includes a storage medium and a processor, where the storage medium is used to store a computer program, and the processor is used to execute the computer program to implement the method for rendering illumination information in the game scene shown in fig. 1 and fig. 3 to fig. 4.
Optionally, the physical device may further include a user interface, a network interface, a camera, radio frequency (RF) circuits, sensors, audio circuits, a Wi-Fi module, and the like. The user interface may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and optionally may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., a Wi-Fi interface), etc.
In an exemplary embodiment, referring to fig. 10, the entity device includes a communication bus, a processor, a memory, a communication interface, an input/output interface, and a display device, where the functional units may communicate with each other through the bus. The memory stores a computer program, and the processor is configured to execute the program stored in the memory to perform the method for rendering illumination information in a game scene in the above embodiments.
It will be appreciated by those skilled in the art that the structure of the entity device for rendering illumination information in a game scene provided in this embodiment is not limited to the entity device, and may include more or fewer components, or may combine some components, or may be different in component arrangement.
The storage medium may also include an operating system, a network communication module. The operating system is a program that manages the physical device hardware and software resources of the store search information processing described above, supporting the execution of information processing programs and other software and/or programs. The network communication module is used for realizing communication among all components in the storage medium and communication with other hardware and software in the information processing entity equipment.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented by means of software plus a necessary general hardware platform, or may be implemented by hardware. Compared with the prior art, the technical scheme of the application adaptively distributes illumination probes over the close-range voxels of the game scene, merges the illumination information captured by the probe grid into texture resource information according to the viewpoint position, and samples that texture resource when rendering, which reduces the amount of texture data transmitted from the CPU to the GPU and improves the rendering speed of illumination information in the game scene.
Those skilled in the art will appreciate that the drawing is merely a schematic illustration of a preferred implementation scenario and that the modules or flows in the drawing are not necessarily required to practice the application. Those skilled in the art will appreciate that modules in an apparatus in an implementation scenario may be distributed in an apparatus in an implementation scenario according to an implementation scenario description, or that corresponding changes may be located in one or more apparatuses different from the implementation scenario. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The sequence numbers of the above implementation scenarios are merely for description and do not represent any order of preference. The foregoing disclosure is merely illustrative of some embodiments of the application, and the application is not limited thereto, as modifications may be made by those skilled in the art without departing from the scope of the application.

Claims (11)

1. A method for rendering lighting information in a game scene, comprising:
acquiring close-range voxels contained in spatial voxels of the game scene, wherein the close-range voxels are voxels, among the voxels of a preset hierarchy formed by subdividing the spatial voxels of the game scene, that satisfy a distance condition, the distance condition being that the bounding box of a voxel intersects an object bounding box in the game scene;
creating effective lighting probes and virtual lighting probes for the close-range voxels contained in the spatial voxels, and generating a probe grid of the spatial voxels, wherein the probe grid is used to capture lighting information in the game scene, the virtual lighting probes are used to associate the effective lighting probes within each hierarchy level, and the effective lighting probes are used to collect, for the viewpoint position, the changing lighting data in the game scene so as to form the lighting information captured by the probe grid;
merging and storing the lighting information captured by the probe grid of the spatial voxels into texture resource information according to the viewpoint position in the game scene; and
in response to a rendering instruction for lighting information, establishing a rendering task using the texture resource information, and rendering the lighting information in the game scene.

2. The method according to claim 1, wherein the acquiring of close-range voxels of object surfaces in the game scene specifically comprises:
acquiring the spatial region covered by an object to be attached in the game scene, and subdividing each spatial voxel in the spatial region into voxels of a preset hierarchy;
traversing each voxel in the preset hierarchy, and judging whether the bounding box of a subdivided voxel intersects an object bounding box in the game scene; and
if so, determining that the subdivided voxel is a close-range voxel of an object surface in the game scene.

3. The method according to claim 1, wherein the creating of effective lighting probes and virtual lighting probes for the close-range voxels contained in the spatial voxels and the generating of the probe grid of the spatial voxels specifically comprise:
creating effective lighting probes for the close-range voxels within a hierarchy level of the spatial voxels; and
adding, to the spatial voxels, virtual voxels corresponding to the hierarchy level, and creating virtual lighting probes for the virtual voxels that satisfy an addition condition, wherein the virtual lighting probes are used to seamlessly interpolate the lighting data sampled by the close-range voxels within the hierarchy level of the spatial voxels.

4. The method according to claim 1, wherein the adding of virtual voxels corresponding to the hierarchy level and the creating of virtual lighting probes for the virtual voxels that satisfy the addition condition specifically comprise:
for the levels of the spatial voxels above the first level, adding virtual voxels corresponding to those levels, wherein the virtual voxels are mapped to the voxels of the corresponding levels in the spatial voxels;
traversing the virtual voxels corresponding to a level, and judging whether a valid lighting probe exists in the mapped voxel of the spatial voxels; and
if not, creating a virtual lighting probe for the virtual voxel.

5. The method according to claim 4, wherein before the merging and storing of the lighting information captured by the probe grid of the spatial voxels into the texture resource information according to the viewpoint position in the game scene, the method further comprises:
expanding the lighting information captured by the probe grid of the spatial voxels into a three-dimensional block data set according to the viewpoint position in the game scene, to form three-dimensional block texture information with a multi-level tree structure; and
during the expansion into the three-dimensional block data set, recording the hierarchical relationship mapped by the three-dimensional block data set into indirect texture information;
and wherein the merging and storing specifically comprises:
merging and storing the three-dimensional block texture information and the indirect texture information into the texture resource information.

6. The method according to any one of claims 1 to 5, wherein the expanding of the lighting information captured by the probe grid of the spatial voxels into the three-dimensional block data set according to the viewpoint position in the game scene, to form the three-dimensional block texture information with a multi-level tree structure, specifically comprises:
sampling the lighting data in the game scene using the effective lighting probes in the probe grid, and interpolating the lighting data to obtain first lighting information of the viewpoint position in the game scene;
seamlessly interpolating, using the virtual lighting probes in the probe grid, the lighting data sampled by the close-range voxels within the hierarchy level of the spatial voxels, to obtain second lighting information of the viewpoint position in the game scene; and
expanding the first lighting information and the second lighting information of the viewpoint position in the game scene into the three-dimensional block data set, to form the three-dimensional block texture information with a multi-level tree structure.

7. The method according to claim 5, wherein the establishing of a rendering task using the texture resource information and the rendering of the lighting information in the game scene specifically comprise:
establishing a rendering task using the texture resource information, and obtaining, from the indirect texture information, the hierarchical relationship mapped by the three-dimensional block texture information;
querying the position information of the three-dimensional texture information in the tree structure according to the hierarchical relationship mapped by the three-dimensional block texture information; and
sampling, from the three-dimensional block texture information and according to the position of the three-dimensional texture information in the tree structure, the lighting information captured by the probe grid of the spatial voxels, and rendering the lighting information.

8. The method according to claim 7, wherein the querying of the position information of the three-dimensional texture information in the tree structure according to the hierarchical relationship mapped by the three-dimensional block texture information specifically comprises:
extracting the level and the offset of the three-dimensional block texture information in the tree structure according to the hierarchical relationship mapped by the three-dimensional block texture information; and
calculating the position information of the three-dimensional texture information in the tree structure according to the level and the offset of the three-dimensional block texture information in the tree structure.

9. A device for rendering lighting information in a game scene, comprising:
an acquisition unit, configured to acquire close-range voxels contained in spatial voxels of the game scene, wherein the close-range voxels are voxels, among the voxels of a preset hierarchy formed by subdividing the spatial voxels of the game scene, that satisfy a distance condition, the distance condition being that the bounding box of a voxel intersects an object bounding box in the game scene;
a creation unit, configured to create effective lighting probes and virtual lighting probes for the close-range voxels contained in the spatial voxels and generate a probe grid of the spatial voxels, wherein the probe grid is used to capture lighting information in the game scene, the virtual lighting probes are used to associate the effective lighting probes within each hierarchy level, and the effective lighting probes are used to collect, for the viewpoint position, the changing lighting data in the game scene so as to form the lighting information captured by the probe grid;
a storage unit, configured to merge and store the lighting information captured by the probe grid of the spatial voxels into texture resource information according to the viewpoint position in the game scene; and
a second rendering unit, configured to, in response to a rendering instruction for lighting information, establish a rendering task using the texture resource information and render the lighting information in the game scene.

10. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method for rendering lighting information in a game scene according to any one of claims 1 to 8.

11. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method for rendering lighting information in a game scene according to any one of claims 1 to 8.
CN202210325539.9A 2021-03-30 2021-03-30 Method, device and equipment for rendering lighting information in game scenes Active CN115131482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210325539.9A CN115131482B (en) 2021-03-30 2021-03-30 Method, device and equipment for rendering lighting information in game scenes

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210325539.9A CN115131482B (en) 2021-03-30 2021-03-30 Method, device and equipment for rendering lighting information in game scenes
CN202110342331.3A CN113034657B (en) 2021-03-30 2021-03-30 Rendering method, device and equipment for illumination information in game scene

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202110342331.3A Division CN113034657B (en) 2021-03-30 2021-03-30 Rendering method, device and equipment for illumination information in game scene

Publications (2)

Publication Number Publication Date
CN115131482A CN115131482A (en) 2022-09-30
CN115131482B true CN115131482B (en) 2025-02-18

Family

ID=76452932

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110342331.3A Active CN113034657B (en) 2021-03-30 2021-03-30 Rendering method, device and equipment for illumination information in game scene
CN202210325539.9A Active CN115131482B (en) 2021-03-30 2021-03-30 Method, device and equipment for rendering lighting information in game scenes

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110342331.3A Active CN113034657B (en) 2021-03-30 2021-03-30 Rendering method, device and equipment for illumination information in game scene

Country Status (1)

Country Link
CN (2) CN113034657B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299220B (en) * 2021-11-19 2024-11-15 腾讯科技(成都)有限公司 Lightmap data generation method, device, equipment, medium and program product
CN116740255A (en) * 2022-03-02 2023-09-12 腾讯科技(深圳)有限公司 Rendering processing method, device, equipment and medium
CN116934946A (en) * 2022-04-02 2023-10-24 腾讯科技(深圳)有限公司 Illumination rendering method and device for virtual terrain, storage medium and electronic equipment
CN114782615A (en) * 2022-04-21 2022-07-22 广东三维家信息科技有限公司 Real-time rendering method and device of indoor scene, electronic equipment and storage medium
CN118096985B (en) * 2023-07-11 2024-12-06 北京艾尔飞康航空技术有限公司 Real-time rendering method and device for virtual forest scene
CN117876572B (en) * 2024-03-13 2024-08-16 腾讯科技(深圳)有限公司 Illumination rendering method, device, equipment and storage medium
CN118070403B (en) * 2024-04-17 2024-07-23 四川省建筑设计研究院有限公司 BIM-based method and system for automatically generating lamp loop influence area space

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204701A (en) * 2016-06-22 2016-12-07 浙江大学 A kind of rendering intent based on light probe interpolation dynamic calculation indirect reference Gao Guang
CN111340926A (en) * 2020-03-25 2020-06-26 北京畅游创想软件技术有限公司 Rendering method and device
CN111744183A (en) * 2020-07-02 2020-10-09 网易(杭州)网络有限公司 Illumination sampling method and device in game and computer equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103198515B (en) * 2013-04-18 2016-05-25 北京尔宜居科技有限责任公司 In a kind of instant adjusting 3D scene, object light is according to the method for rendering effect
US9390548B2 (en) * 2014-06-16 2016-07-12 Sap Se Three-dimensional volume rendering using an in-memory database
CN104574489B (en) * 2014-12-16 2017-11-03 中国人民解放军理工大学 Landform and motion vector integrated approach based on lamination quaternary tree atlas
EP3337585B1 (en) * 2015-08-17 2022-08-10 Lego A/S Method of creating a virtual game environment and interactive game system employing the method
CN111798558A (en) * 2020-06-02 2020-10-20 完美世界(北京)软件科技发展有限公司 Data processing method and device
CN112169324A (en) * 2020-09-22 2021-01-05 完美世界(北京)软件科技发展有限公司 Rendering method, device and equipment of game scene

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204701A (en) * 2016-06-22 2016-12-07 浙江大学 A kind of rendering intent based on light probe interpolation dynamic calculation indirect reference Gao Guang
CN111340926A (en) * 2020-03-25 2020-06-26 北京畅游创想软件技术有限公司 Rendering method and device
CN111744183A (en) * 2020-07-02 2020-10-09 网易(杭州)网络有限公司 Illumination sampling method and device in game and computer equipment

Also Published As

Publication number Publication date
CN113034657B (en) 2022-04-22
CN113034657A (en) 2021-06-25
CN115131482A (en) 2022-09-30

Similar Documents

Publication Publication Date Title
CN114549723B (en) Method, device and equipment for rendering lighting information in game scenes
CN115131482B (en) Method, device and equipment for rendering lighting information in game scenes
CN110738721B (en) Three-dimensional scene rendering acceleration method and system based on video geometric analysis
US11734879B2 (en) Graphics processing using directional representations of lighting at probe positions within a scene
US11804002B2 (en) Techniques for traversing data employed in ray tracing
US8570322B2 (en) Method, system, and computer program product for efficient ray tracing of micropolygon geometry
US12190435B2 (en) Enhanced techniques for traversing ray tracing acceleration structures
JP2009525526A (en) Method for synthesizing virtual images by beam emission
JP6864495B2 (en) Drawing Global Illumination in 3D scenes
US6791544B1 (en) Shadow rendering system and method
US11508112B2 (en) Early release of resources in ray tracing hardware
KR101100650B1 (en) Indirect Lighting Representation and Multi-layer Displacement Mapping System Using Map Data and Its Method
US20240009226A1 (en) Techniques for traversing data employed in ray tracing
CN116993894B (en) Virtual picture generation method, device, equipment, storage medium and program product
US12154214B2 (en) Generation and traversal of partial acceleration structures for ray tracing
Hoppe et al. Adaptive meshing and detail-reduction of 3D-point clouds from laser scans
Atanasov et al. Efficient Rendering of Digital Twins Consisting of Both Static And Dynamic Data
CN118135080A (en) Laser point cloud model rendering method and system
Djeu High quality, high performance rendering using shadow ray acceleration and aggressive micropolygon tessellation rates
ARCH for the degree of Master of Science in the Department of Game Technologies

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20220930

Assignee: Beijing Xuanguang Technology Co.,Ltd.

Assignor: Perfect world (Beijing) software technology development Co.,Ltd.

Contract record no.: X2022990000514

Denomination of invention: Rendering method, device and device of lighting information in game scene

License type: Exclusive License

Record date: 20220817

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant