
CN118172465A - Virtual scene rendering method and device, computer equipment and storage medium


Info

Publication number
CN118172465A
Authority
CN
China
Prior art keywords
rendering
data
virtual scene
gpu
geometric
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410316302.3A
Other languages
Chinese (zh)
Inventor
李炯
杜双泓
韦洪宇
苏磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuming Technology Hangzhou Co ltd
Original Assignee
Wuming Technology Hangzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuming Technology Hangzhou Co ltd filed Critical Wuming Technology Hangzhou Co ltd
Priority to CN202410316302.3A priority Critical patent/CN118172465A/en
Publication of CN118172465A publication Critical patent/CN118172465A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/02: Non-photorealistic rendering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00: General purpose image data processing
    • G06T 1/20: Processor architectures; Processor configuration, e.g. pipelining

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a virtual scene rendering method and apparatus, a computer device, and a storage medium, belonging to the field of graphics rendering. The method comprises the following steps: in response to a rendering request for a target virtual scene, the CPU stores mirror image data corresponding to target rendering data into a storage space of the GPU, where the target rendering data is the data required to render the target virtual scene and the mirror image data is the target rendering data in a data structure recognizable by the GPU; the GPU generates at least one rendering instruction for the target virtual scene based on the mirror image data; and the GPU renders the target virtual scene in the rendering manner indicated by the at least one rendering instruction. With this technical scheme, the rendering instructions are generated using the parallel computing capability of the GPU, so the CPU no longer needs to generate them itself, which reduces the CPU's workload and performance bottleneck and improves rendering efficiency.

Description

Virtual scene rendering method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of graphics rendering, and in particular, to a method and apparatus for rendering a virtual scene, a computer device, and a storage medium.
Background
In the field of graphics rendering, a rendering pipeline is typically employed to perform a specific graphics rendering task. The rendering pipeline describes the process of generating, or rendering, a 2D image from the geometric description of a 3D scene and a given position and orientation of a virtual camera. This process is carried out jointly by a central processing unit (CPU) and a graphics processing unit (GPU). The CPU loads the data required for rendering from system memory into video memory and then processes that data to generate a series of rendering instructions, which instruct the GPU to render the objects in the scene according to the rendering state set by the CPU. Based on the rendering instructions, the GPU invokes its computing units to process the data transmitted by the CPU and finally outputs a 2D image. However, generating the rendering instructions requires the CPU to perform a large number of computing operations, which places it under heavy computational pressure, makes it prone to becoming a performance bottleneck, and thereby lowers rendering efficiency. How to reduce the CPU's workload during rendering so as to improve rendering efficiency is therefore a technical problem to be solved.
Disclosure of Invention
The embodiments of the application provide a virtual scene rendering method and apparatus, a computer device, and a storage medium, which generate rendering instructions using the parallel computing capability of a GPU, so that the CPU does not need to generate the rendering instructions itself; this reduces the CPU's workload and performance bottleneck and improves rendering efficiency. The technical scheme is as follows:
In one aspect, a virtual scene rendering method is provided, applied to a computer device, wherein the computer device comprises a central processing unit (CPU) and a graphics processing unit (GPU), and the method comprises the following steps:
in response to a rendering request for a target virtual scene, storing, by the CPU, mirror image data corresponding to target rendering data into a storage space of the GPU, wherein the target rendering data is the data required to render the target virtual scene, and the mirror image data is the target rendering data in a data structure recognizable by the GPU;
generating, by the GPU, at least one rendering instruction for the target virtual scene based on the mirror image data, the at least one rendering instruction indicating a rendering manner for the target virtual scene; and
rendering, by the GPU, the target virtual scene in the rendering manner indicated by the at least one rendering instruction.
In another aspect, a virtual scene rendering apparatus is provided, configured in a computer device comprising a central processing unit (CPU) and a graphics processing unit (GPU), the apparatus comprising:
a storage module, configured to store, by the CPU in response to a rendering request for a target virtual scene, mirror image data corresponding to target rendering data into a storage space of the GPU, wherein the target rendering data is the data required to render the target virtual scene, and the mirror image data is the target rendering data in a data structure recognizable by the GPU;
a generating module, configured to generate, by the GPU, at least one rendering instruction for the target virtual scene based on the mirror image data, the at least one rendering instruction indicating a rendering manner for the target virtual scene; and
a rendering module, configured to render, by the GPU, the target virtual scene in the rendering manner indicated by the at least one rendering instruction.
In some embodiments, the memory module comprises:
The structure conversion unit is used for responding to a rendering request of a target virtual scene, converting a data structure of the target rendering data from a first data structure to a second data structure through the CPU so as to obtain mirror image data corresponding to the target rendering data, wherein the first data structure is a data structure identifiable by the CPU, and the second data structure is a data structure identifiable by the GPU;
and the data storage unit is used for storing the mirror image data into the storage space of the GPU through the CPU.
In some embodiments, the mirror image data includes first mirror image data and second mirror image data, where the first mirror image data corresponds to the geometric description data in the target rendering data and the second mirror image data corresponds to the grid instance data in the target rendering data; the grid instance data indicates object information of the geometric objects in the target virtual scene, and the geometric description data indicates description information of the geometric primitives required to render those geometric objects;
the generating module comprises:
The changing unit is used for changing the description mode of the first mirror image data from a first description mode to a second description mode through the GPU, wherein the first description mode is a description mode based on a data function, and the second description mode is a description mode based on a data type;
The first processing unit is configured to encode and classify, through the GPU, the first mirror image data and the second mirror image data based on the buffer in which each geometric object of the target virtual scene resides (indicated by the second mirror image data), the material information of the geometric object, and the primitive format information of the geometric object (indicated by the first mirror image data), so as to obtain at least one object set, where the buffer indicates the storage location of the object information indicated by the second mirror image data, the primitive format information indicates the data type of the geometric primitives required to render the geometric object, and the geometric objects in one object set share the same material information, primitive format information, and buffer;
The second processing unit is configured to perform, through the GPU, a visibility test and occlusion culling on the geometric objects in the target virtual scene to obtain a visibility list, where the visibility list includes the object identifiers of the geometric objects visible in the target virtual scene;
the generating unit is used for generating, through the GPU, the at least one rendering instruction based on the visibility list and the at least one object set, wherein the at least one rendering instruction corresponds to the at least one object set one to one, and the rendering instruction is used for indicating to render the geometric objects visible in the object set.
In some embodiments, the first processing unit is configured to encode, by using the GPU, the first image data and the second image data based on a buffer area where the geometric object is located, material information of the geometric object, and primitive format information of the geometric object, so as to obtain encoded information corresponding to the geometric object, where the encoded information includes buffer area encoding, material encoding, and primitive format encoding of the geometric object; and classifying the geometric objects in the target virtual scene based on the coding information of the geometric objects to obtain the at least one object set, wherein the buffer area coding, the material coding and the primitive format coding of at least one geometric object included in the object set are the same.
In some embodiments, the generating unit is configured to determine, by the GPU, for any one of the at least one object set, at least one visible geometric object from the object set based on the visibility list; and generating rendering instructions corresponding to the object set based on the at least one visible geometric object.
In some embodiments, the second processing unit is configured to perform, through the GPU, a visibility test on the geometric objects in the target virtual scene to obtain the visibility of each geometric object, the visibility indicating the degree to which the object is visible; to perform, through the GPU, occlusion culling on the geometric objects in the target virtual scene based on their visibility, so as to remove the geometric objects that are completely occluded; and to generate the visibility list from the geometric objects that remain after culling.
In some embodiments, the apparatus further comprises:
The reading module is used for reading a visible material set from the GPU through the CPU, the visible material set comprises at least one target material identifier, the at least one target material identifier is an identifier of a material required for rendering a geometric object visible in the target virtual scene, and the at least one target material identifier corresponds to the at least one rendering instruction one by one;
And the sending module is used for sending a rendering instruction corresponding to the target material identifier in the at least one rendering instruction to the GPU based on any target material identifier in the visible material set through the CPU so as to instruct the GPU to render at least one geometric object in the target virtual scene based on the material corresponding to the target material identifier.
In another aspect, a computer device is provided, the computer device including a processor and a memory, the memory being configured to store at least one section of computer program, the at least one section of computer program being loaded and executed by the processor to implement a method for rendering a virtual scene in an embodiment of the application.
In another aspect, a computer readable storage medium is provided, in which at least one segment of a computer program is stored, the at least one segment of the computer program being loaded and executed by a processor to implement a method for rendering a virtual scene as in an embodiment of the application.
In another aspect, a computer program product is provided, comprising a computer program stored in a computer readable storage medium, the computer program being read from the computer readable storage medium by a processor of a computer device, the computer program being executed by the processor to cause the computer device to perform the method of rendering a virtual scene provided in the above aspects or various alternative implementations of the aspects.
The application discloses a virtual scene rendering method in which mirror image data corresponding to the data required to render a target virtual scene is stored in a storage space of the GPU, so that the GPU, rather than the CPU, processes the mirror image data and generates the rendering instructions for the target virtual scene. Because the rendering instructions are generated with the parallel computing capability of the GPU, the CPU no longer needs to generate them, which reduces the CPU's workload and performance bottleneck and improves rendering efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is an implementation environment schematic diagram of a virtual scene rendering method according to an embodiment of the present application;
fig. 2 is a flowchart of a method for rendering a virtual scene according to an embodiment of the present application;
FIG. 3 is a flow chart of another virtual scene rendering method provided in accordance with an embodiment of the present application;
FIG. 4 is a schematic diagram of a process for storing mirrored data according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a virtual scene rendering process provided according to an embodiment of the present application;
fig. 6 is a block diagram of a virtual scene rendering apparatus according to an embodiment of the present application;
Fig. 7 is a block diagram of another virtual scene rendering apparatus provided according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
The terms "first," "second," and the like in this disclosure are used for distinguishing between similar elements or items having substantially the same function and function, and it should be understood that there is no logical or chronological dependency between the terms "first," "second," and "n," and that there is no limitation on the amount and order of execution.
The term "at least one" in the present application means one or more, and the meaning of "a plurality of" means two or more.
It should be noted that, the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals related to the present application are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of the related data is required to comply with the relevant laws and regulations and standards of the relevant countries and regions. For example, both the geometric description data and the grid instance data referred to in the present application are acquired with sufficient authorization.
Hereinafter, terms related to the present application will be explained.
Virtual scene: the scene that an application displays (or provides) while running on a terminal. The virtual scene may be a simulation of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment. It may be a two-dimensional virtual space, a 2.5-dimensional virtual space, or a three-dimensional virtual space; the dimension of the virtual scene is not limited in the embodiments of the present application. For example, a virtual scene may include sky, land, and sea, the land may include environmental elements such as deserts and cities, and the user can control a virtual object to move in the virtual scene.
The virtual scene rendering method provided by the embodiment of the application can be executed by a computer device. Taking the computer device being a terminal as an example, the following introduces an implementation environment of the virtual scene rendering method provided by an embodiment of the present application; fig. 1 is a schematic diagram of such an implementation environment. Referring to fig. 1, the implementation environment includes a terminal 101 and a server 102. The terminal 101 and the server 102 can be directly or indirectly connected through wired or wireless communication, which the present application does not limit.
In some embodiments, terminal 101 is an electronic device such as, but not limited to, a smart phone, tablet, notebook, desktop, game console, and the like. The terminal 101 runs an application program supporting a virtual scene. The application may be a game engine for building virtual scenes and rendering instance data in the virtual scenes. Optionally, the application may also be a scene editor for editing virtual scenes in a game. The virtual scene may be a game scene of any one of an open world game, a first person shooter game, a third person shooter game, a multiplayer online tactical game, a virtual reality application, a three-dimensional map program, or a multiplayer gunfight survival game. The application is associated with the server 102 and background services are provided by the server 102.
In some embodiments, the server 102 is a stand-alone physical server, or can be a server cluster or a distributed system formed by a plurality of physical servers, or can be a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), big data, and artificial intelligence platforms.
In some embodiments, the server 102 takes on primary computing work and the terminal 101 takes on secondary computing work; or server 102 takes on secondary computing work and terminal 101 takes on primary computing work; or the server 102 and the terminal 101 perform cooperative computing by adopting a distributed computing architecture.
Those skilled in the art will recognize that the number of terminals may be greater or smaller: there may be only one terminal, or tens or hundreds of terminals, or more. The embodiment of the application does not limit the number of terminals or the device type.
Fig. 2 is a flowchart of a virtual scene rendering method according to an embodiment of the present application, and referring to fig. 2, in an embodiment of the present application, an example of the method is described as being executed by a terminal. The virtual scene rendering method comprises the following steps:
201. In response to a rendering request for a target virtual scene, the terminal stores mirror image data corresponding to target rendering data into a storage space of the GPU through the CPU, wherein the target rendering data is data required for rendering the target virtual scene, and the mirror image data is target rendering data with a data structure identifiable by the GPU.
In the embodiment of the application, the terminal is provided with an application program supporting the virtual scene. The application program is used to construct the virtual scene and to render the data required for rendering it according to a preset rendering pipeline. Accordingly, while running the application program, the terminal can respond to a rendering request for the target virtual scene by storing, through the CPU (Central Processing Unit), mirror image data corresponding to the target rendering data into the storage space of the GPU (Graphics Processing Unit), that is, by establishing a mirror of the target rendering data on the GPU. The target virtual scene may be a virtual scene provided by an open game world. The target rendering data is the data required to render the target virtual scene and includes the geometric description data and grid instance data of the scene. It should be noted that, in the embodiment of the present application, creating the mirror image data on the GPU is not simply copying the target rendering data from the CPU to the GPU; the data structure of the target rendering data is also converted into a data structure recognizable by the GPU, and it is this converted form that is stored on the GPU. The mirror image data is therefore the target rendering data expressed in a data structure recognizable by the GPU.
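To make the mirroring step concrete, the following is a minimal host-side sketch, assuming CUDA as the GPU interface; the `GpuMeshInstance` layout and the function name are hypothetical illustrations, since the patent does not fix a concrete structure:

```cuda
#include <cuda_runtime.h>
#include <vector>

// Hypothetical fixed-layout record for one renderable instance: the
// "GPU-recognizable" data structure is assumed here to be a flat,
// fixed-size struct.
struct GpuMeshInstance {
    unsigned int materialId;    // material information
    unsigned int geometryId;    // index into the geometric description data
    float transform[16];        // coordinate transformation (4x4, column-major)
    float aabbMin[3];           // bounding box, minimum corner
    float aabbMax[3];           // bounding box, maximum corner
};

// CPU side: upload the already-converted records into GPU storage in
// response to a rendering request, i.e., establish the mirror on the GPU.
GpuMeshInstance* uploadMirror(const std::vector<GpuMeshInstance>& converted) {
    GpuMeshInstance* dMirror = nullptr;
    size_t bytes = converted.size() * sizeof(GpuMeshInstance);
    cudaMalloc(reinterpret_cast<void**>(&dMirror), bytes); // GPU storage space
    cudaMemcpy(dMirror, converted.data(), bytes, cudaMemcpyHostToDevice);
    return dMirror;
}
```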
202. And the terminal generates at least one rendering instruction of the target virtual scene based on the mirror image data through the GPU, wherein the at least one rendering instruction is used for indicating the rendering mode of the target virtual scene.
In the embodiment of the application, the terminal can leverage the parallel computing capability of the GPU to process the mirror image data corresponding to the target rendering data stored in the storage space, generating at least one rendering instruction for the target virtual scene. The at least one rendering instruction indicates the rendering manner of the target virtual scene, that is, how the geometric objects in the target virtual scene are to be rendered. The geometric objects may be characters, buildings, furnishings, props, mountains, trees, and the like in the target virtual scene, which the embodiments of the present application do not limit.
203. And rendering the target virtual scene by the terminal through the GPU based on the rendering mode indicated by the at least one rendering instruction.
In the embodiment of the application, after the GPU generates the at least one rendering instruction for the target virtual scene, the terminal can store the at least one rendering instruction into a unified rendering instruction buffer. The terminal can then, on the CPU side, directly invoke the at least one rendering instruction from the rendering instruction buffer to instruct the GPU to render the target virtual scene in the rendering manner those instructions indicate.
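As an illustration of what a unified rendering instruction buffer can hold, the sketch below assumes entries laid out like GPU-driven indirect draw commands (the field layout follows Vulkan's VkDrawIndexedIndirectCommand; the patent itself does not name a graphics API):

```cuda
// One entry of the unified rendering instruction buffer. The GPU writes an
// array of these; the CPU only submits the buffer for execution and never
// builds the individual instructions itself.
struct DrawIndexedIndirect {
    unsigned int indexCount;    // indices of the object set's geometry
    unsigned int instanceCount; // number of visible geometric objects
    unsigned int firstIndex;    // offset into the shared index buffer
    int          vertexOffset;  // offset into the shared vertex buffer
    unsigned int firstInstance; // offset into the per-instance data
};
```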
The application discloses a virtual scene rendering method in which mirror image data corresponding to the data required to render a target virtual scene is stored in a storage space of the GPU, so that the GPU, rather than the CPU, processes the mirror image data and generates the rendering instructions for the target virtual scene. Because the rendering instructions are generated with the parallel computing capability of the GPU, the CPU no longer needs to generate them, which reduces the CPU's workload and performance bottleneck and improves rendering efficiency.
Fig. 3 is a flowchart of another virtual scene rendering method provided according to an embodiment of the present application, referring to fig. 3, in the embodiment of the present application, an example of execution by a terminal is described. The virtual scene rendering method comprises the following steps:
301. in response to a rendering request for a target virtual scene, the terminal converts a data structure of target rendering data from a first data structure to a second data structure through the CPU to obtain mirror image data corresponding to the target rendering data, wherein the first data structure is a data structure identifiable by the CPU, the second data structure is a data structure identifiable by the GPU, and the target rendering data is data required for rendering the target virtual scene.
In the embodiment of the application, the terminal is provided with an application program supporting the virtual scene. The application program is used to construct the virtual scene and to render the data required for rendering it according to a preset rendering pipeline. Accordingly, while running the application program, the terminal can respond to a rendering request for the target virtual scene by converting, through the CPU, the data structure of the target rendering data from the first data structure to the second data structure, that is, from a data structure recognizable by the CPU to one recognizable by the GPU. The first data structure is a data structure recognizable by the CPU, i.e., a data structure suited to how the CPU works; the second data structure is a data structure recognizable by the GPU, i.e., a data structure suited to how the GPU works.
For example, take the case where the data structure of the target rendering data is an array. On the CPU, arrays are typically variable-length rather than fixed-length, i.e., the target rendering data is stored on the CPU as variable-length arrays. On the GPU, however, arrays are fixed-length; a variable-length array is not a data structure suited to GPU operation. Therefore, for the terminal to be able to process the target rendering data through the GPU and generate rendering instructions there, it cannot simply copy the target rendering data from the CPU onto the GPU when storing the mirror image data; it must also convert the data structure of the target rendering data into one recognizable by the GPU.
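A common way to perform this kind of conversion, shown here as a hedged sketch rather than the patent's mandated layout, is to flatten the CPU's variable-length arrays into one contiguous fixed-length buffer plus an offset table:

```cuda
#include <vector>

// Flatten variable-length CPU arrays into a single contiguous buffer plus
// an offset table; the GPU can then treat the data as one fixed-length array.
struct FlatArray {
    std::vector<float>        data;    // all elements, back to back
    std::vector<unsigned int> offsets; // offsets[i] = start of sub-array i
};

FlatArray flatten(const std::vector<std::vector<float>>& cpuArrays) {
    FlatArray out;
    for (const auto& a : cpuArrays) {
        out.offsets.push_back(static_cast<unsigned int>(out.data.size()));
        out.data.insert(out.data.end(), a.begin(), a.end());
    }
    // Trailing sentinel so sub-array i spans [offsets[i], offsets[i+1]).
    out.offsets.push_back(static_cast<unsigned int>(out.data.size()));
    return out;
}
```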
In the embodiment of the application, the target rendering data is the data required to render the target virtual scene and includes the geometric description data and grid instance data of the target virtual scene. The grid instance data indicates object information of the geometric objects in the target virtual scene; the object information may include material information, coordinate transformation information, geometric primitive information, bounding boxes, and the like. The geometric description data indicates description information of the geometric primitives required to render the geometric objects; the geometric primitives include vertices, line segments, triangular faces, etc., and their description information may include position information, normal information, tangent information, UV (texture coordinate) information, and so on.
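For illustration, one geometry description record might be laid out as below; this vertex layout is an assumption mirroring the attributes listed above, not a structure taken from the patent:

```cuda
// Hypothetical layout of one geometric-description record (one vertex of a
// geometric primitive), mirroring the attributes named in the text.
struct VertexRecord {
    float position[3]; // position information
    float normal[3];   // normal information
    float tangent[4];  // tangent information (w commonly stores handedness)
    float uv[2];       // UV (texture coordinate) information
};
```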
302. The terminal stores mirror image data into a storage space of the GPU through the CPU, wherein the mirror image data comprises first mirror image data and second mirror image data, the first mirror image data is mirror image data corresponding to geometric description data in target rendering data, and the second mirror image data is mirror image data corresponding to grid instance data in the target rendering data.
In the embodiment of the application, under the condition that the data structure of the target rendering data is converted into the data structure identifiable by the GPU, the terminal can store the target rendering data with the data structure identifiable by the GPU, namely mirror image data corresponding to the target rendering data, into a storage space of the GPU. Since the target rendering data includes the mesh instance data and the geometry description data of the target virtual scene, the mirror data corresponding to the target rendering data should also include the mirror data corresponding to each of the mesh instance data and the geometry description data, that is, the first mirror data and the second mirror data. The first mirror image data is mirror image data corresponding to the geometric description data. The second mirror image data is mirror image data corresponding to the grid instance data.
For example, fig. 4 is a schematic diagram of the storage process of the mirror image data according to an embodiment of the present application. As shown in fig. 4, the terminal can store on the GPU the mirror image data corresponding to the description information of each geometric primitive (the geometric description data) stored on the CPU, so that the GPU can read the description information of each geometric primitive just as the CPU can. The terminal can likewise store on the GPU the mirror image data corresponding to all renderable instantiated grid objects stored on the CPU, that is, to the object information of the geometric objects, so that the GPU can read the object information of each geometric object just as the CPU can.
303. The terminal changes the description mode of the first mirror image data from a first description mode to a second description mode through the GPU, wherein the first description mode is a description mode based on a data function, and the second description mode is a description mode based on a data type.
In the embodiment of the application, the first mirror image data corresponds to the geometric description data, which indicates the description information of the geometric primitives required for rendering, such as position information, normal information, and tangent information. This information is conventionally described by data function, i.e., the first mirror image data is partitioned in the GPU's storage space according to what each datum is for. Correspondingly, when the terminal reads the first mirror image data through the GPU, it has to fetch each datum according to its data function, and because the geometric description data in the first mirror image data spans many different functions, reading data of different functions becomes cumbersome. Therefore, the terminal can change, through the GPU, the description manner of the first mirror image data from the first description manner to the second description manner, i.e., from a description based on data function to a description based on data type. The first description manner is a description based on data function. The second description manner is a description based on data type, where the data types may include integer types, floating-point types, and the like. When the terminal then reads the geometric description data in the first mirror image data, it can read each datum according to its data type, giving geometric description data of different data functions a uniform, standardized access path.
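The contrast between the two description manners can be sketched as follows; the accessor names are hypothetical, and the type-based load is one plausible realization of a uniform, type-keyed access path:

```cuda
#include <cuda_runtime.h>

// Function-based description: each data function needs its own accessor,
// so reading N kinds of geometric description data needs N entry points.
__device__ float3 loadNormal(const float* normalStream, unsigned int i) {
    return make_float3(normalStream[3 * i],
                       normalStream[3 * i + 1],
                       normalStream[3 * i + 2]);
}

// Type-based description: all attributes live in one raw byte buffer and
// are read by (byte offset, type) through a single templated load, giving
// data of different functions a uniform access path.
template <typename T>
__device__ T loadTyped(const unsigned char* words, unsigned int byteOffset) {
    return *reinterpret_cast<const T*>(words + byteOffset);
}
```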
304. The terminal encodes and classifies, through the GPU, the first mirror image data and the second mirror image data based on the buffer in which each geometric object of the target virtual scene resides (indicated by the second mirror image data), the object's material information, and the object's primitive format information (indicated by the first mirror image data), to obtain at least one object set. The buffer indicates the storage location of the object information indicated by the second mirror image data, and the primitive format information indicates the data type of the geometric primitives required to render the geometric object; the geometric objects in one object set share the same material information, primitive format information, and buffer.
In the embodiment of the application, within the storage space of the GPU, the terminal can allocate a plurality of buffers for the second mirror image data and place the second mirror image data in them. Because each buffer's storage space is limited, typically 64 MB or 128 MB, the terminal can split the second mirror image data into multiple pieces and store them in the multiple buffers. Since the target virtual scene contains many geometric objects and any one buffer holds the object information of at least one of them, two geometric objects may or may not reside in the same buffer. Likewise, because some geometric objects in the target virtual scene may share a material, their material information may or may not be the same; and because the description information of the geometric primitives indicated by the first mirror image data is described by data type, the data types of the description information for different geometric objects may or may not be the same. Accordingly, the terminal can encode the first mirror image data and the second mirror image data through the GPU, based on the buffer in which each geometric object resides, its material information, and its primitive format information, to obtain encoding information for each geometric object. The encoding information uniquely identifies the object's material, the buffer in which it resides, and the data type of the geometric primitives required to render it. The terminal can then classify the geometric objects of the target virtual scene based on this encoding information to obtain at least one object set, where the geometric objects in any one set share the same material information, primitive format information, and buffer. Primitive format information indicates the data type, such as an integer or floating-point type, of the description information of the geometric primitives required to render the geometric object.
In some embodiments, the process of encoding and classifying the first mirror data and the second mirror data by the terminal based on the buffer in which the geometric object is located, the object information of the geometric object, and the primitive format information includes the following steps (1) - (2).
(1) The terminal encodes, through the GPU, the first mirror image data and the second mirror image data based on the buffer in which each geometric object resides, the object's material information, and the object's primitive format information, to obtain encoding information corresponding to the geometric object. The encoding information comprises the object's buffer code, material code, and primitive format code. The terminal can apply three-level coding to the mirror image data corresponding to the target rendering data through the GPU, obtaining encoding information for each of the geometric objects in the target virtual scene. The first level is buffer encoding: the terminal encodes the buffer in which the object information of the geometric object resides in the second mirror image data, obtaining the object's buffer code, which indicates that buffer. The second level is material encoding: the terminal encodes the material information of the geometric object in the second mirror image data, obtaining the object's material code, which indicates the material the object uses. The third level is primitive format encoding: the terminal encodes the primitive format information of the geometric object in the first mirror image data, obtaining the object's primitive format code, which indicates the data type of the description information of the geometric primitives required to render the object.
(2) The terminal classifies the geometric objects in the target virtual scene based on their encoding information to obtain the at least one object set, where the geometric objects in one object set share the same buffer code, material code, and primitive format code. Because the encoding information comprises a buffer code, a material code, and a primitive format code, the terminal can classify the geometric objects by comparing these three codes. Optionally, for any two geometric objects in the target virtual scene, the terminal first checks whether their buffer codes match; if so, the two objects reside in the same buffer. If the buffer codes match, the terminal then checks whether their material codes match; if so, the two objects use the same material. Finally, if both the buffer codes and the material codes match, the terminal checks whether their primitive format codes match; if so, the description information of their geometric primitives has the same data type, and the terminal places the two geometric objects into the same object set. By encoding the mirror image data, the terminal can classify the geometric objects of the target virtual scene by their buffer, material, and primitive format codes, and can then pack the rendering instructions of the geometric objects in one class into a single rendering instruction, reducing the number of rendering calls for the target virtual scene and improving its rendering efficiency.
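One plausible realization of the three-level coding, under the assumption that each level fits a fixed bit field (the bit widths here are arbitrary illustrations), is to pack the three codes into a single sort key:

```cuda
// Pack buffer code (level 1), material code (level 2), and primitive
// format code (level 3) into one 64-bit key; objects with equal keys
// belong to the same object set.
__host__ __device__ unsigned long long encodeObject(unsigned int bufferCode,
                                                    unsigned int materialCode,
                                                    unsigned int formatCode) {
    return ((unsigned long long)(bufferCode   & 0xFFFFFFu) << 40) |
           ((unsigned long long)(materialCode & 0xFFFFFFu) << 16) |
            (unsigned long long)(formatCode   & 0xFFFFu);
}
```

Classification then reduces to sorting the per-object keys and taking each run of equal keys as one object set, which reproduces the level-by-level comparison order described above (buffer first, then material, then primitive format).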
305. The terminal performs, through the GPU, a visibility test and occlusion culling on the geometric objects in the target virtual scene to obtain a visibility list, where the visibility list includes the object identifiers of the geometric objects visible in the target virtual scene.
In an embodiment of the present application, the target virtual scene includes a plurality of geometric objects, but not all of them need to be rendered; the terminal only needs to render the geometric objects the user can see. Correspondingly, the terminal can perform, through the GPU, a visibility test and occlusion culling on the geometric objects in the target virtual scene to obtain a visibility list, where the visibility list includes the object identifiers of the geometric objects visible in the target virtual scene. By running the visibility test and occlusion culling on the GPU, the terminal can determine which geometric objects in the target virtual scene actually need rendering and avoid rendering objects that are completely blocked by other geometry and therefore invisible.
In some embodiments, the terminal performs the visibility test and occlusion culling on the geometric objects in the target virtual scene as follows: the terminal performs, through the GPU, a visibility test on the geometric objects in the target virtual scene, obtaining the visibility of each geometric object, where the visibility indicates the degree to which the object is visible; based on that visibility, the terminal performs occlusion culling through the GPU, removing the geometric objects that are completely occluded in the target virtual scene; and the terminal generates the visibility list from the geometric objects that remain after culling. The visibility test tells the terminal how visible each geometric object in the target virtual scene is, and according to that degree of visibility the terminal can remove from the scene's geometric objects those rendered invisible by the occlusion of other objects. Finally, the terminal generates the visibility list from the object identifiers of the remaining geometric objects. Running the visibility test and occlusion culling on the GPU determines how visible each geometric object is, prevents the terminal from rendering objects that are completely blocked by other geometry, reduces unnecessary rendering overhead, and improves the rendering efficiency of the target virtual scene while reducing the CPU's workload.
Optionally, when rendering the initial frame of the target virtual scene, the terminal can use the depth map rendered for the previous frame and a HiZ (hierarchical Z-buffer) algorithm, an occlusion culling algorithm that runs on the GPU, to cull all geometric objects in the target virtual scene and generate the corresponding visibility list. During HiZ culling, the terminal computes the visibility of each geometric object in the target virtual scene, culls the objects that are completely blocked based on that visibility, and finally generates the visibility list.
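A hedged sketch of such a per-object test follows; it reuses the `GpuMeshInstance` record from the earlier sketch, stubs out the screen-space projection, and assumes a single conservative far-depth value per tile, whereas a real HiZ test would choose a mip level of the depth pyramid from the box's projected size:

```cuda
// One thread per geometric object: compare the object's nearest possible
// depth against the farthest occluder depth from the previous frame's
// depth pyramid, and record a visibility flag.
__global__ void visibilityTest(const GpuMeshInstance* instances,
                               int numInstances,
                               const float* hizFarDepth, // per-tile occluder depth
                               unsigned int* visibleFlags) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numInstances) return;
    // Projection of the bounding box to screen space is stubbed out for
    // brevity; assume tile 0 covers the object and depth grows with distance.
    float nearestBoxDepth = instances[i].aabbMin[2]; // crude stand-in
    float occluderDepth   = hizFarDepth[0];
    // Visible if any part of the box can lie in front of the occluders.
    visibleFlags[i] = (nearestBoxDepth <= occluderDepth) ? 1u : 0u;
}
```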
306. The terminal generates, through the GPU, at least one rendering instruction based on the visibility list and the at least one object set, where the at least one rendering instruction corresponds one-to-one with the at least one object set, and each rendering instruction instructs rendering of the geometric objects visible in its object set.
In an embodiment of the application, the visibility list includes the object identifiers of the geometric objects visible in the target virtual scene, so the terminal can determine from it which geometric objects in the scene need to be rendered. Any one of the at least one object set contains geometric objects that share the same buffer, material information, and primitive format information, so the terminal can merge the rendering instructions of the geometric objects in one object set into a single new rendering instruction. Because an object set may also contain geometric objects that do not need to be rendered, for any object set the terminal can generate, through the GPU, the rendering instruction corresponding to that set based on the visibility list and the geometric objects the set contains, where the rendering instruction instructs rendering of only the visible geometric objects in the set. Generating the at least one rendering instruction on the GPU from the visibility list and the at least one object set reduces the CPU's workload, and packing the rendering of all visible geometric objects of one object set into a single rendering instruction reduces the number of rendering calls for the target virtual scene and improves its rendering efficiency.
In some embodiments, the terminal generates the rendering instruction for any object set based on the visibility list as follows: for any one of the at least one object set, the terminal determines, through the GPU, at least one visible geometric object from the object set based on the visibility list; the terminal then generates the rendering instruction corresponding to the object set based on that at least one visible geometric object. Because the target virtual scene contains geometric objects that are blocked by other objects and therefore need not be rendered, the terminal can, before generating the rendering instruction for an object set, use the object identifiers of the visible geometric objects in the visibility list to pick out the visible members of the set, and then generate a rendering instruction that renders only those members. Culling the invisible geometric objects of an object set against the visibility list before generating its rendering instruction prevents the terminal from generating instructions that would render objects hidden by other geometry in the target virtual scene, reducing unnecessary rendering overhead and improving rendering efficiency.
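The following kernel sketches this compact-then-emit step under stated assumptions: object sets are stored in a CSR-style members/offsets layout, one thread block handles one set, and per-set instruction templates carry the set's geometry ranges; these layout choices are illustrative, not taken from the patent:

```cuda
// One block per object set: compact the set's visible members, then have
// thread 0 finalize that set's packed rendering instruction.
__global__ void buildDrawsPerSet(const unsigned int* setOffsets,  // size numSets+1
                                 const unsigned int* setMembers,  // object ids
                                 const unsigned int* visibleFlags,
                                 unsigned int* visibleIds,        // per-set slots
                                 const DrawIndexedIndirect* templates,
                                 DrawIndexedIndirect* draws) {
    unsigned int s = blockIdx.x;
    unsigned int begin = setOffsets[s], end = setOffsets[s + 1];
    __shared__ unsigned int count;
    if (threadIdx.x == 0) count = 0;
    __syncthreads();
    for (unsigned int m = begin + threadIdx.x; m < end; m += blockDim.x) {
        unsigned int obj = setMembers[m];
        if (visibleFlags[obj]) {
            unsigned int slot = atomicAdd(&count, 1u);
            visibleIds[begin + slot] = obj;   // compacted visible list
        }
    }
    __syncthreads();
    if (threadIdx.x == 0) {
        draws[s] = templates[s];              // shared geometry of the set
        draws[s].instanceCount = count;       // draw only the visible objects
        draws[s].firstInstance = begin;       // where this set's slots start
    }
}
```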
307. The terminal reads a visible material set from the GPU through the CPU, wherein the visible material set comprises at least one target material identifier, the at least one target material identifier is an identifier of a material required for rendering a geometric object visible in a target virtual scene, and the at least one target material identifier corresponds to the at least one rendering instruction one by one.
In the embodiment of the application, when the terminal generates the at least one rendering instruction on the GPU, it can also generate a visible material set comprising at least one target material identifier, and the terminal can read this visible material set from the GPU through the CPU. Each target material identifier identifies a material required to render a geometric object visible in the target virtual scene. Because any rendering instruction renders the visible geometric objects of its corresponding object set, and the geometric objects in one set share the same material information, the identifier of the material used by the visible objects of any set is a target material identifier in the visible material set. Since the at least one rendering instruction corresponds one-to-one with the at least one object set, and the at least one object set corresponds one-to-one with the at least one target material identifier, the target material identifiers correspond one-to-one with the rendering instructions.
308. And the terminal sends a rendering instruction corresponding to the target material identifier in the at least one rendering instruction to the GPU based on any target material identifier in the visible material set through the CPU so as to instruct the GPU to render at least one geometric object in the target virtual scene based on the material corresponding to the target material identifier.
In the embodiment of the application, after reading the visible material set from the GPU through the CPU, and because the target material identifiers in the visible material set correspond one-to-one with the rendering instructions, the terminal can, for any target material identifier in the visible material set, invoke through the CPU the rendering instruction corresponding to that identifier from the rendering instruction buffer, instructing the GPU to render the geometric objects of the corresponding object set in the target virtual scene using the material the identifier designates.
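A host-side sketch of this dispatch loop follows, with a hypothetical `issueIndirectDraw()` standing in for the graphics-API call that executes one entry of the unified rendering instruction buffer (the patent does not name a specific API):

```cuda
#include <vector>

// Placeholder for the real submission call: a real engine would bind the
// pipeline state of material `materialId` and execute entry `drawIndex`
// of the unified rendering instruction buffer through its graphics API.
void issueIndirectDraw(unsigned int materialId, unsigned int drawIndex) {
    // intentionally left as a stub
}

// CPU side of step 308: target material identifiers map one-to-one onto
// rendering instructions, so the CPU only walks the visible material set.
void dispatchVisibleDraws(const std::vector<unsigned int>& visibleMaterials) {
    for (size_t i = 0; i < visibleMaterials.size(); ++i) {
        issueIndirectDraw(visibleMaterials[i], static_cast<unsigned int>(i));
    }
}
```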
For example, fig. 5 is a schematic diagram of a virtual scene rendering process according to an embodiment of the present application. As shown in fig. 5, on the GPU side, when rendering the starting frame of the target virtual scene the terminal can build an occlusion map of the scene from the fully rendered depth map of the previous frame, then run an occlusion culling algorithm on the GPU over all geometric objects in the target virtual scene and generate the corresponding visibility list. During culling, the terminal computes the visibility of each geometric object, culls the completely blocked objects based on that visibility, and finally generates the visibility list. The terminal can then generate a rendering instruction set, comprising at least one rendering instruction, from the visibility list and the at least one object set obtained by encoding and classifying the mirror image data. On the CPU side, using the set of material identifiers of all visible geometric objects read back from the GPU, the terminal issues the rendering instruction corresponding to each material identifier, material by material, to instruct the GPU to render the target virtual scene.
The application discloses a virtual scene rendering method in which mirror image data corresponding to the data required to render a target virtual scene is stored in a storage space of the GPU, so that the GPU, rather than the CPU, processes the mirror image data and generates the rendering instructions for the target virtual scene. Because the rendering instructions are generated with the parallel computing capability of the GPU, the CPU no longer needs to generate them, which reduces the CPU's workload and performance bottleneck and improves rendering efficiency.
Fig. 6 is a block diagram of a virtual scene rendering apparatus according to an embodiment of the present application. The apparatus is used for executing the steps when the virtual scene rendering method is executed, referring to fig. 6, the virtual scene rendering apparatus includes: a storage module 601, a generation module 602, and a rendering module 603.
The storage module 601 is configured to store, by the CPU, mirror image data corresponding to target rendering data, where the target rendering data is data required for rendering the target virtual scene, and the mirror image data is target rendering data having a data structure identifiable by the GPU, in response to a rendering request for the target virtual scene;
The generating module 602 is configured to generate, by the GPU, at least one rendering instruction of the target virtual scene based on the mirror image data, where the at least one rendering instruction is used to indicate a rendering mode of the target virtual scene;
the rendering module 603 is configured to render, by the GPU, the target virtual scene based on a rendering manner indicated by the at least one rendering instruction.
In some embodiments, fig. 7 is a block diagram of another virtual scene rendering apparatus provided according to an embodiment of the present application. Referring to fig. 7, the memory module 601 includes:
The structure conversion unit 701 is configured to convert, by the CPU, a data structure of the target rendering data from a first data structure to a second data structure in response to a rendering request for the target virtual scene, so as to obtain mirror image data corresponding to the target rendering data, where the first data structure is a data structure identifiable by the CPU, and the second data structure is a data structure identifiable by the GPU;
The data storage unit 702 is configured to store, by the CPU, the mirrored data into the storage space of the GPU.
In some embodiments, the mirror data includes first mirror data and second mirror data, the first mirror data is mirror data corresponding to geometric description data in the target rendering data, the second mirror data is mirror data corresponding to grid instance data in the target rendering data, the grid instance data is used for indicating object information of geometric objects in the target virtual scene, and the geometric description data is used for indicating description information of geometric primitives required for rendering the geometric objects;
with continued reference to fig. 7, the generating module 602 includes:
A changing unit 703, configured to change, by the GPU, a description manner of the first image data from a first description manner to a second description manner, where the first description manner is a description manner based on a data function, and the second description manner is a description manner based on a data type;
The first processing unit 704 is configured to encode and classify, through the GPU, the first mirror image data and the second mirror image data based on the buffer in which each geometric object of the target virtual scene resides (indicated by the second mirror image data), the material information of the geometric object, and the primitive format information of the geometric object (indicated by the first mirror image data), to obtain at least one object set, where the buffer indicates the storage location of the object information indicated by the second mirror image data, the primitive format information indicates the data type of the geometric primitives required to render the geometric object, and the geometric objects in one object set share the same material information, primitive format information, and buffer;
The second processing unit 705 is configured to perform, by the GPU, a visibility test and occlusion culling on the geometric objects in the target virtual scene to obtain a visibility list, where the visibility list includes the object identifiers of the geometric objects visible in the target virtual scene;
the generating unit 706 is configured to generate, by the GPU, the at least one rendering instruction based on the visibility list and the at least one object set, where the at least one rendering instruction corresponds one-to-one to the at least one object set, and each rendering instruction instructs rendering of the geometric objects visible in its corresponding object set.
In some embodiments, the first processing unit 704 is configured to encode, by the GPU, the first mirror image data and the second mirror image data based on the buffer area in which a geometric object is located, the material information of the geometric object, and the primitive format information of the geometric object, to obtain encoding information corresponding to the geometric object, where the encoding information includes the buffer area code, the material code, and the primitive format code of the geometric object; and to classify the geometric objects in the target virtual scene based on their encoding information to obtain the at least one object set, where the geometric objects included in one object set share the same buffer area code, material code, and primitive format code.
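The encode-and-classify step performed by the first processing unit 704 can be pictured as packing the three codes into a single sort key, as in the following hedged C++ sketch; the field widths and the GeometricObject layout are assumptions made for illustration and are not prescribed by the method.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

struct GeometricObject {
    uint32_t objectId;
    uint32_t bufferRegion;    // buffer area code: where its object info is stored
    uint32_t material;        // material code
    uint32_t primitiveFormat; // primitive format code: data type of its primitives
};

// Pack the three codes into one 64-bit key; the 24/24/16-bit split is an
// assumption for illustration only.
uint64_t EncodeKey(const GeometricObject& o) {
    return ((uint64_t(o.bufferRegion)    & 0xFFFFFF) << 40) |
           ((uint64_t(o.material)        & 0xFFFFFF) << 16) |
           ( uint64_t(o.primitiveFormat) & 0xFFFF);
}

// Classify: objects with identical keys share buffer area, material, and
// primitive format, so each bucket is one object set.
std::unordered_map<uint64_t, std::vector<uint32_t>>
Classify(const std::vector<GeometricObject>& objects) {
    std::unordered_map<uint64_t, std::vector<uint32_t>> sets;
    for (const GeometricObject& o : objects)
        sets[EncodeKey(o)].push_back(o.objectId);
    return sets;
}
```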
In some embodiments, the generating unit 706 is configured to, for any one of the at least one object set, determine, by the GPU, at least one visible geometric object from the object set based on the visibility list, and generate, based on the at least one visible geometric object, the rendering instruction corresponding to the object set.
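A possible realization of the generating unit 706, again only as a sketch, assuming an object set is keyed as in the previous example and a rendering instruction records the visible object identifiers of its set:

```cpp
#include <cstdint>
#include <unordered_map>
#include <unordered_set>
#include <utility>
#include <vector>

struct RenderInstruction {
    uint64_t setKey;                  // identifies the object set
    std::vector<uint32_t> visibleIds; // visible geometric objects to render
};

// One rendering instruction per object set, restricted to the objects that
// appear in the visibility list; sets with no visible object are skipped,
// a simplification of the one-to-one correspondence described above.
std::vector<RenderInstruction> BuildInstructions(
    const std::unordered_map<uint64_t, std::vector<uint32_t>>& objectSets,
    const std::unordered_set<uint32_t>& visibilityList) {
    std::vector<RenderInstruction> instructions;
    for (const auto& [key, ids] : objectSets) {
        RenderInstruction inst{key, {}};
        for (uint32_t id : ids)
            if (visibilityList.count(id) != 0) inst.visibleIds.push_back(id);
        if (!inst.visibleIds.empty()) instructions.push_back(std::move(inst));
    }
    return instructions;
}
```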
In some embodiments, the second processing unit 705 is configured to perform, by the GPU, a visibility test on the geometric objects in the target virtual scene to obtain the visibility of each geometric object, where the visibility indicates the degree to which the geometric object is visible; to perform, by the GPU, occlusion culling on the geometric objects in the target virtual scene based on their visibility, so that the geometric objects that are completely occluded in the target virtual scene are removed; and to generate the visibility list based on the geometric objects remaining after culling.
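The visibility pass of the second processing unit 705 runs on the GPU in parallel; the following serial C++ stand-in only illustrates the logic, with the visibility value taken as given rather than computed from, say, a depth pyramid:

```cpp
#include <cstdint>
#include <vector>

struct SceneObject {
    uint32_t objectId;
    float    visibility; // degree of visibility: 0.0 fully occluded, 1.0 fully visible
};

// Visibility test placeholder: a real GPU pass would derive this value,
// for example by testing the object's bounds against a depth pyramid.
float TestVisibility(const SceneObject& o) { return o.visibility; }

// Occlusion culling plus visibility list: completely occluded objects are
// removed, and the identifiers of the remaining objects form the list.
std::vector<uint32_t> BuildVisibilityList(const std::vector<SceneObject>& scene) {
    std::vector<uint32_t> visibilityList;
    for (const SceneObject& o : scene)
        if (TestVisibility(o) > 0.0f)
            visibilityList.push_back(o.objectId);
    return visibilityList;
}
```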
In some embodiments, with continued reference to fig. 7, the apparatus further comprises:
The reading module 604 is configured to read, by the CPU, a visible material set from the GPU, where the visible material set includes at least one target material identifier, each target material identifier identifying a material required for rendering a geometric object visible in the target virtual scene, and the at least one target material identifier corresponds one-to-one to the at least one rendering instruction;
the sending module 605 is configured to send, by the CPU, for any target material identifier in the visible material set, the rendering instruction corresponding to that target material identifier to the GPU, so as to instruct the GPU to render at least one geometric object in the target virtual scene based on the material corresponding to the target material identifier.
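To illustrate how the reading module 604 and the sending module 605 might cooperate, the sketch below has the CPU read back the visible material identifiers and submit only the rendering instructions whose materials are visible; SubmitToGpu is a hypothetical hook, not an API of the described method:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

struct RenderInstruction { uint64_t setKey; };

// Hypothetical submission hook: a real engine would record the instruction
// into a command buffer and submit it to the GPU queue here.
void SubmitToGpu(const RenderInstruction& /*inst*/) {}

// The CPU reads back the visible material set (reading module 604) and, for
// each target material identifier, sends the corresponding rendering
// instruction to the GPU (sending module 605).
void DispatchVisible(
    const std::vector<uint32_t>& visibleMaterials,
    const std::unordered_map<uint32_t, RenderInstruction>& instructionByMaterial) {
    for (uint32_t materialId : visibleMaterials) {
        auto it = instructionByMaterial.find(materialId);
        if (it != instructionByMaterial.end())
            SubmitToGpu(it->second); // draw the objects using this material
    }
}
```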
The embodiment of the application provides a virtual scene rendering apparatus, which stores mirror image data corresponding to the data required for rendering a target virtual scene into the storage space of the GPU, so that the GPU can process the mirror image data in place of the CPU and thereby generate the rendering instructions of the target virtual scene. Because the rendering instructions are generated with the parallel computing capability of the GPU, the CPU no longer needs to generate them, which reduces the workload and performance bottleneck of the CPU and improves rendering efficiency.
It should be noted that, when the virtual scene rendering apparatus provided in the foregoing embodiments renders a virtual scene, the division into the functional modules described above is merely an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the virtual scene rendering apparatus provided in the foregoing embodiments and the virtual scene rendering method embodiments belong to the same concept; for the detailed implementation process, refer to the method embodiments, which is not repeated here.
Fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application. The terminal 800 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 800 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or other names.
In general, the terminal 800 includes: a processor 801 and a memory 802.
Processor 801 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 801 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 801 may also include a main processor and a coprocessor: the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 801 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 801 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 802 may include one or more computer-readable storage media, which may be non-transitory. Memory 802 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 802 is used to store at least one computer program for execution by processor 801 to implement the method of rendering a virtual scene provided by embodiments of the method of the present application.
In some embodiments, the terminal 800 may further optionally include: a peripheral interface 803, and at least one peripheral. The processor 801, the memory 802, and the peripheral interface 803 may be connected by a bus or signal line. Individual peripheral devices may be connected to the peripheral device interface 803 by buses, signal lines, or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 804, a display 805, a camera assembly 806, audio circuitry 807, and a power supply 808.
Peripheral interface 803 may be used to connect at least one input/output (I/O) related peripheral to the processor 801 and the memory 802. In some embodiments, the processor 801, the memory 802, and the peripheral interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 801, the memory 802, and the peripheral interface 803 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 804 is configured to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 804 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission and converting received electromagnetic signals into electrical signals. In some embodiments, the radio frequency circuit 804 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 804 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 804 may further include NFC (Near Field Communication) related circuits, which is not limited by the present application.
The display screen 805 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 805 is a touch display, it can also collect touch signals on or above its surface; such a touch signal may be input to the processor 801 as a control signal for processing. In this case, the display screen 805 may also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 805, disposed on the front panel of the terminal 800; in other embodiments, there may be at least two display screens 805, disposed on different surfaces of the terminal 800 or in a folded design; in still other embodiments, the display screen 805 may be a flexible display disposed on a curved or folded surface of the terminal 800. The display screen 805 may even be arranged in an irregular, non-rectangular shape, that is, an irregularly shaped screen. The display screen 805 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 806 is used to capture images or video. In some embodiments, the camera assembly 806 includes a front camera and a rear camera; typically, the front camera is disposed on the front panel of the terminal and the rear camera on its rear surface. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic and VR (Virtual Reality) shooting or other fused shooting functions. In some embodiments, the camera assembly 806 may also include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash combines a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
Audio circuitry 807 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, inputting the electric signals to the processor 801 for processing, or inputting the electric signals to the radio frequency circuit 804 for voice communication. For stereo acquisition or noise reduction purposes, a plurality of microphones may be respectively disposed at different portions of the terminal 800. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, audio circuit 807 may also include a headphone jack.
The power supply 808 is used to power the various components in the terminal 800. The power supply 808 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 808 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 800 also includes one or more sensors 809. The one or more sensors 809 include, but are not limited to: acceleration sensor 810, gyro sensor 811, pressure sensor 812, optical sensor 813, and proximity sensor 814.
The acceleration sensor 810 may detect the magnitudes of accelerations on three coordinate axes of a coordinate system established with the terminal 800. For example, the acceleration sensor 810 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 801 may control the display screen 805 to display a user interface in a landscape view or a portrait view based on the gravitational acceleration signal acquired by the acceleration sensor 810. Acceleration sensor 810 may also be used for the acquisition of motion data for a game or user.
The gyro sensor 811 may detect a body direction and a rotation angle of the terminal 800, and the gyro sensor 811 may collect a 3D motion of the user on the terminal 800 in cooperation with the acceleration sensor 810. The processor 801 may implement the following functions based on the data collected by the gyro sensor 811: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
Pressure sensor 812 may be disposed on a side frame of terminal 800 and/or below display 805. When the pressure sensor 812 is disposed on a side frame of the terminal 800, a grip signal of the user on the terminal 800 may be detected, and the processor 801 performs left-right hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 812. When the pressure sensor 812 is disposed at the lower layer of the display screen 805, the processor 801 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 805. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 813 is used to collect the intensity of the ambient light. In one embodiment, the processor 801 may control the display brightness of the display screen 805 based on the intensity of ambient light collected by the optical sensor 813. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 805 is turned up; when the ambient light intensity is low, the display brightness of the display screen 805 is turned down. In another embodiment, the processor 801 may also dynamically adjust the shooting parameters of the camera module 806 based on the ambient light intensity collected by the optical sensor 813.
A proximity sensor 814, also referred to as a distance sensor, is typically provided on the front panel of the terminal 800. The proximity sensor 814 is used to collect a distance between a user and the front surface of the terminal 800. In one embodiment, when the proximity sensor 814 detects that the distance between the user and the front face of the terminal 800 gradually decreases, the processor 801 controls the display 805 to switch from the bright screen state to the off screen state; when the proximity sensor 814 detects that the distance between the user and the front surface of the terminal 800 gradually increases, the processor 801 controls the display 805 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 8 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
Fig. 9 is a schematic structural diagram of a server according to an embodiment of the present application, where the server 900 may have a relatively large difference due to different configurations or performances, and may include one or more processors (Central Processing Units, CPU) 901 and one or more memories 902, where at least one computer program is stored in the memories 902, and the at least one computer program is loaded and executed by the processor 901 to implement the virtual scene rendering method provided in the above method embodiments. Of course, the server may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing the functions of the device, which are not described herein.
The embodiment of the application also provides a computer readable storage medium, in which at least one section of computer program is stored, and the at least one section of computer program is loaded and executed by a processor of a computer device to implement the operations performed by the computer device in the virtual scene rendering method of the above embodiment. For example, the computer readable storage medium may be Read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), compact disc Read-Only Memory (Compact Disc Read-Only Memory, CD-ROM), magnetic tape, floppy disk, optical data storage device, and the like.
Embodiments of the present application also provide a computer program product comprising a computer program stored in a computer readable storage medium. The processor of the computer device reads the computer program from the computer-readable storage medium, and the processor executes the computer program so that the computer device performs the rendering method of the virtual scene provided in the above-described various alternative implementations.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing descriptions are merely preferred embodiments of the present application and are not intended to limit it; the scope of protection of the application is defined by the appended claims.

Claims (10)

1. A method of rendering a virtual scene, characterized by being applied to a computer device comprising a central processing unit CPU and a graphics processing unit GPU, the method comprising:
Responding to a rendering request of a target virtual scene, storing mirror image data corresponding to target rendering data into a storage space of the GPU by the CPU, wherein the target rendering data is data required for rendering the target virtual scene, and the mirror image data is target rendering data with a data structure identifiable by the GPU;
Generating, by the GPU, at least one rendering instruction of the target virtual scene based on the mirrored data, the at least one rendering instruction being configured to indicate a rendering manner of the target virtual scene;
And rendering the target virtual scene by the GPU based on the rendering mode indicated by the at least one rendering instruction.
2. The method according to claim 1, wherein the storing, by the CPU, mirror data corresponding to target rendering data into a storage space of the GPU in response to a rendering request for a target virtual scene, includes:
Responding to a rendering request of a target virtual scene, and converting a data structure of target rendering data from a first data structure to a second data structure through the CPU to obtain mirror image data corresponding to the target rendering data, wherein the first data structure is a data structure identifiable by the CPU, and the second data structure is a data structure identifiable by the GPU;
and storing the mirror image data into a storage space of the GPU through the CPU.
3. The method according to claim 1, wherein the mirror image data comprises first mirror image data and second mirror image data, the first mirror image data is mirror image data corresponding to geometric description data in the target rendering data, the second mirror image data is mirror image data corresponding to mesh instance data in the target rendering data, the mesh instance data is used for indicating object information of a geometric object in the target virtual scene, and the geometric description data is used for indicating description information of geometric primitives required for rendering the geometric object;
the generating, by the GPU, at least one rendering instruction of the target virtual scene based on the mirrored data, includes:
changing the description mode of the first mirror image data from a first description mode to a second description mode through the GPU, wherein the first description mode is a description mode based on a data function, and the second description mode is a description mode based on a data type;
Encoding and classifying, by the GPU, the first mirror image data and the second mirror image data based on a buffer area in which a geometric object of the target virtual scene is located in the second mirror image data, material information of the geometric object, and primitive format information of the geometric object in the first mirror image data, to obtain at least one object set, wherein the buffer area is used for indicating a storage position of the object information of the geometric object indicated by the second mirror image data, the primitive format information is used for indicating a data type of the geometric primitives required for rendering the geometric object, and the material information, the primitive format information, and the buffer area of the at least one geometric object included in the object set are the same;
performing, by the GPU, a visibility test and occlusion culling on the geometric objects in the target virtual scene to obtain a visibility list, wherein the visibility list comprises object identifiers of the geometric objects visible in the target virtual scene;
Generating, by the GPU, the at least one rendering instruction based on the visibility list and the at least one object set, the at least one rendering instruction corresponding one-to-one to the at least one object set, the rendering instruction being configured to instruct rendering of geometric objects visible in the object set.
4. The method according to claim 3, wherein the encoding and classifying, by the GPU, the first mirror image data and the second mirror image data based on the buffer area in which the geometric object of the target virtual scene is located in the second mirror image data, the material information of the geometric object, and the primitive format information of the geometric object in the first mirror image data, to obtain at least one object set comprises:
Encoding the first mirror image data and the second mirror image data based on a buffer area where the geometric object is located, material information of the geometric object and primitive format information of the geometric object by the GPU to obtain encoding information corresponding to the geometric object, wherein the encoding information comprises buffer area encoding, material encoding and primitive format encoding of the geometric object;
And classifying the geometric objects in the target virtual scene based on the coding information of the geometric objects to obtain the at least one object set, wherein the buffer area coding, the material coding and the primitive format coding of at least one geometric object included in the object set are the same.
5. The method of claim 3, wherein the generating, by the GPU, the at least one rendering instruction based on the visibility list and the at least one object set comprises:
For any one of the at least one set of objects, determining, by the GPU, at least one visible geometric object from the set of objects based on the visibility list;
and generating rendering instructions corresponding to the object set based on the at least one visible geometric object.
6. The method according to claim 3, wherein the performing, by the GPU, the visibility test and occlusion culling on the geometric objects in the target virtual scene to obtain a visibility list includes:
Performing visibility test on the geometric objects in the target virtual scene through the GPU to obtain the visibility of the geometric objects in the target virtual scene, wherein the visibility is used for indicating the visibility degree of the geometric objects;
performing, by the GPU, occlusion culling on the geometric objects in the target virtual scene based on the visibility of the geometric objects, so as to remove the geometric objects that are completely occluded in the target virtual scene;
and generating the visibility list based on the geometric objects remaining in the target virtual scene after culling.
7. The method according to claim 1, wherein the method further comprises:
reading, by the CPU, a visible material set from the GPU, where the visible material set includes at least one target material identifier, where the at least one target material identifier is an identifier of a material required to render a geometric object visible in the target virtual scene, and the at least one target material identifier corresponds to the at least one rendering instruction one by one;
and sending, by the CPU, a rendering instruction corresponding to the target material identifier in the at least one rendering instruction to the GPU based on any target material identifier in the visible material set, so as to instruct the GPU to render at least one geometric object in the target virtual scene based on a material corresponding to the target material identifier.
8. A virtual scene rendering apparatus, configured in a computer device, the computer device including a central processing unit CPU and a graphics processing unit GPU, the apparatus comprising:
The storage module is used for responding to a rendering request of a target virtual scene, and storing mirror image data corresponding to target rendering data into a storage space of the GPU through the CPU, wherein the target rendering data is data required for rendering the target virtual scene, and the mirror image data is target rendering data with a data structure identifiable by the GPU;
the generating module is used for generating at least one rendering instruction of the target virtual scene based on the mirror image data through the GPU, wherein the at least one rendering instruction is used for indicating the rendering mode of the target virtual scene;
And the rendering module is used for rendering the target virtual scene through the GPU based on the rendering mode indicated by the at least one rendering instruction.
9. A computer device, characterized in that it comprises a processor and a memory for storing at least one piece of computer program, which is loaded by the processor and which performs the method of rendering a virtual scene according to any of claims 1 to 7.
10. A computer readable storage medium, characterized in that the computer readable storage medium is for storing at least one segment of a computer program for executing the virtual scene rendering method of any of claims 1 to 7.
CN202410316302.3A 2024-03-19 2024-03-19 Virtual scene rendering method and device, computer equipment and storage medium Pending CN118172465A (en)

Priority Applications (1)

Application Number: CN202410316302.3A
Priority Date / Filing Date: 2024-03-19
Title: Virtual scene rendering method and device, computer equipment and storage medium
Status: Pending

Publications (1)

Publication Number: CN118172465A
Publication Date: 2024-06-11

Family

ID=91356130

Country Status (1)

CN: CN118172465A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination