CN113457161B - Picture display method, information generation method, device, equipment and storage medium

Info

Publication number
CN113457161B
Authority
CN
China
Prior art keywords
model
scene
probability
polygon
view angle
Prior art date
Legal status
Active
Application number
CN202110805394.8A
Other languages
Chinese (zh)
Other versions
CN113457161A
Inventor
王钦佳
Current Assignee
Shenzhen Tencent Network Information Technology Co Ltd
Original Assignee
Shenzhen Tencent Network Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Tencent Network Information Technology Co Ltd filed Critical Shenzhen Tencent Network Information Technology Co Ltd
Priority to CN202110805394.8A priority Critical patent/CN113457161B/en
Publication of CN113457161A publication Critical patent/CN113457161A/en
Application granted granted Critical
Publication of CN113457161B publication Critical patent/CN113457161B/en

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tessellation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a picture display method, an information generation method, a device, equipment and a storage medium, relating to the technical field of virtual scenes. The method comprises the following steps: acquiring a target view angle; in response to the target view angle belonging to a high-probability view angle set, acquiring model visibility information corresponding to the high-probability view angle set, where the model visibility information is used for indicating the model parts of each scene model in the virtual scene that are not occluded under the high-probability view angle set; submitting rendering data of the model parts indicated by the visibility information to a rendering component, so as to render a scene picture of the virtual scene through the rendering component; and displaying the scene picture of the virtual scene. The scheme can improve the rendering efficiency of virtual scenes.

Description

Picture display method, information generation method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of virtual scenes, in particular to a picture display method, an information generation method, a device, equipment and a storage medium.
Background
The visual presentation of a virtual scene is typically achieved by rendering objects in the virtual scene.
In a three-dimensional virtual scene, scene objects usually occlude one another. To reduce rendering workload and improve rendering efficiency, a rendering component in the virtual scene display device, when rendering the three-dimensional virtual scene, first performs vertex coloring on each scene object in the virtual scene, and then, according to the occlusion relationships among the scene objects, skips the occluded model parts of each scene model in the subsequent rendering steps.
However, in the above solution, vertex coloring still has to be performed on the occluded model parts of each scene model, which limits the rendering efficiency of the virtual scene.
Disclosure of Invention
The embodiment of the application provides a picture display method, an information generation method, a device, equipment and a storage medium, which can improve the rendering efficiency of virtual scenes. The technical scheme is as follows:
In one aspect, a picture display method is provided, the method comprising:
acquiring a target view angle, wherein the target view angle is a camera view angle for observing a virtual scene; the virtual scene corresponds to a high-probability view angle set, and the high-probability view angle set comprises camera view angles whose access probability in the virtual scene is greater than a probability threshold;
in response to the target view angle belonging to the high-probability view angle set, acquiring model visibility information corresponding to the high-probability view angle set; the model visibility information is used for indicating the model parts of each scene model in the virtual scene that are not occluded under the high-probability view angle set;
submitting rendering data of the model parts indicated by the model visibility information to a rendering component, so as to render a scene picture of the virtual scene through the rendering component;
and displaying the scene picture of the virtual scene.
In one aspect, there is provided an information generating method, the method including:
acquiring a high-probability view angle set corresponding to a virtual scene, wherein the high-probability view angle set comprises camera view angles whose access probability in the virtual scene is greater than a probability threshold;
determining visible model part indication information based on the high-probability view angle set, wherein the visible model part indication information is used for indicating the model parts of each scene model in the virtual scene that are not occluded under the high-probability view angle set;
generating model visibility information corresponding to the high-probability view angle set; the model visibility information is used for instructing the virtual scene display device to submit rendering data of the model parts indicated by the model visibility information to a rendering component when a target view angle belongs to the high-probability view angle set; the target view angle is a camera view angle for observing the virtual scene.
In another aspect, there is provided a picture display device, the device comprising:
a view angle acquisition module, configured to acquire a target view angle, wherein the target view angle is a camera view angle for observing a virtual scene; the virtual scene corresponds to a high-probability view angle set, and the high-probability view angle set comprises camera view angles whose access probability in the virtual scene is greater than a probability threshold;
a visibility information acquisition module, configured to acquire, in response to the target view angle belonging to the high-probability view angle set, model visibility information corresponding to the high-probability view angle set; the model visibility information is used for indicating the model parts of each scene model in the virtual scene that are not occluded under the high-probability view angle set;
a rendering module, configured to submit rendering data of the model parts indicated by the model visibility information to a rendering component, so as to render a scene picture of the virtual scene through the rendering component;
and a display module, configured to display the scene picture of the virtual scene.
In one possible implementation, the scene model is composed of at least two polygons; the model visibility information includes polygon visibility information of the non-occluded model portion and vertex visibility information of a polygon of the non-occluded model portion.
In one possible implementation, the polygon visible information includes an index interval of polygons in the non-occluded model portion;
the vertex visibility information includes index intervals of polygon vertices in the non-occluded model portion.
In one possible implementation, the apparatus further includes:
and the picture rendering module is used for responding to the target view angle belonging to the high-probability view angle set and rendering a scene picture of the virtual scene based on the scene model of the virtual scene.
In another aspect, there is provided an information generating apparatus including:
a view angle set acquisition module, configured to acquire a high-probability view angle set corresponding to a virtual scene, wherein the high-probability view angle set comprises camera view angles whose access probability in the virtual scene is greater than a probability threshold;
an indication information acquisition module, configured to determine, based on the high-probability view angle set, visible model part indication information, where the visible model part indication information is used for indicating the model parts of each scene model in the virtual scene that are not occluded under the high-probability view angle set;
and a visibility information generation module, configured to generate model visibility information corresponding to the high-probability view angle set; the model visibility information is used for instructing the virtual scene display device to submit rendering data of the model parts indicated by the model visibility information to a rendering component when a target view angle belongs to the high-probability view angle set; the target view angle is a camera view angle for observing the virtual scene.
In one possible implementation, the scene model is composed of at least two polygons; the model visibility information includes polygon visibility information of the non-occluded model portion and vertex visibility information of a polygon of the non-occluded model portion.
In one possible implementation manner, the indication information obtaining module includes:
the array acquisition submodule is used for acquiring polygon visibility arrays of each scene model under the high-probability view angle set and taking the polygon visibility arrays as the visible model part indication information; the polygon visibility array is used for indicating whether polygons in each scene model are visible under the high-probability view angle set respectively.
In one possible implementation manner, the polygon visibility array includes values corresponding to polygons in the scene models respectively;
the array acquisition sub-module comprises:
a polygon acquisition unit configured to acquire a target polygon, where the target polygon is a polygon that is in a visible state at a first camera view angle among the polygons included in the target scene model; the target scene model is a scene model which is blocked under the first camera view angle in each scene model; the first camera view is any one camera view in the high probability view set;
and a value setting unit, configured to set the value corresponding to the target polygon in the polygon visibility array to a specified value.
In one possible implementation, the apparatus further includes:
the model screening unit is used for screening a first type scene model meeting the shielding condition and a second type scene model meeting the shielded condition from the scene models before the target polygon is acquired;
and the target determining unit is used for determining a scene model which is blocked by the first type scene model under the first camera view angle from the second type scene model as the target scene model.
In one possible implementation, the polygon acquisition unit is configured to:
number the vertices of each polygon in the target scene model;
assign different color values to the vertices of the polygons based on the numbers of the vertices;
perform vertex-coloring rendering on the target scene model based on the first camera view angle to obtain a vertex-colored rendering image corresponding to the target scene model;
obtain the visible vertices among the vertices of the polygons based on the color values of the pixel points in the vertex-colored rendering image;
and acquire the target polygon based on the visible vertices among the vertices of the polygons.
In one possible implementation, the visibility information generation module includes:
a sorting sub-module, configured to sort the polygons of each scene model based on the polygon visibility array, so that the polygons visible under the high-probability view angle set are contiguous among the sorted polygons of each scene model;
a first information acquisition sub-module, configured to obtain the polygon visible information of the non-occluded model part based on a polygon index numbering result; the polygon index numbering result is a result of sequentially numbering the indexes of the sorted polygons of each scene model; the polygon visible information includes the index intervals of the polygons in the non-occluded model part;
a second information acquisition sub-module, configured to obtain the vertex visible information of the non-occluded model part based on a polygon vertex index numbering result; the polygon vertex index numbering result is a result of sequentially numbering the indexes of the vertices in the sorted polygons of each scene model; the vertex visible information includes the index intervals of the polygon vertices in the non-occluded model part;
and a visibility information generation sub-module, configured to take the polygon visible information of the non-occluded model part and the vertex visible information of the non-occluded model part as the model visibility information corresponding to the high-probability view angle set.
In another aspect, embodiments of the present application provide a computer device including a processor and a memory having at least one computer program stored therein, the at least one computer program being loaded and executed by the processor to implement a method as described in the above aspects.
In another aspect, embodiments of the present application provide a computer readable storage medium having stored therein at least one computer program that is loaded and executed by a processor to implement a method as described in the above aspects.
In another aspect, embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method described in the above aspect.
The technical solutions provided by the embodiments of the application bring at least the following beneficial effects:
in the virtual scene, when the target view angle of a user falls within the high-probability view angle set, only the model parts that are not occluded under the high-probability view angle set are rendered; the occluded model parts need not be submitted for rendering, and accordingly need not be vertex-colored. The vertex coloring work in the rendering process can thus be reduced in most cases, improving the rendering efficiency of the virtual scene.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is an architecture diagram of a virtual scenario development and demonstration system provided in an exemplary embodiment of the present application;
FIG. 2 is a flowchart of a method for generating information and displaying images in a virtual scene according to an exemplary embodiment of the present application;
FIG. 3 is a schematic illustration of a pre-calculation parameter setting interface involved in the embodiment of FIG. 2;
FIG. 4 is a schematic illustration of an engine menu bar according to the embodiment of FIG. 2;
FIG. 5 is a schematic diagram of a debug window involved in the embodiment shown in FIG. 2;
FIG. 6 is a culling control interface related to the embodiment of FIG. 2;
FIG. 7 is a schematic diagram of a visualization of a vertex/triangle array of a model in accordance with the embodiment of FIG. 2;
FIG. 8 is a flow chart of a pre-calculation implementation of the system involved in the embodiment shown in FIG. 2;
FIG. 9 is a flow chart of the output of a triangle visible set in accordance with the embodiment of FIG. 2;
FIG. 10 is a visual representation of information relating to the embodiment shown in FIG. 2;
FIG. 11 is a vertex visibility output schematic diagram relating to the embodiment shown in FIG. 2;
FIG. 12 is a schematic diagram of a visual set of visualizations involved in the embodiment shown in FIG. 2;
FIG. 13 is a schematic diagram of a new array of vertices involved in the embodiment of FIG. 2;
FIG. 14 is a schematic diagram of a rearranged triangle array in accordance with the embodiment of FIG. 2;
FIG. 15 is a diagram of the remapped vertex indices involved in the embodiment of FIG. 2;
FIG. 16 is a schematic diagram of a virtual scene runtime implementation process involved in the embodiment of FIG. 2;
FIG. 17 is a pre-culling scene preview of the embodiment of FIG. 2;
FIG. 18 is a preview of a scene after culling in accordance with the embodiment of FIG. 2;
FIG. 19 is a block diagram of a display device according to an embodiment of the present disclosure;
fig. 20 is a block diagram of the information generating apparatus provided in an embodiment of the present application;
fig. 21 is a block diagram of a computer device according to another embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the application, as detailed in the appended claims.
It should be understood that references herein to "a number" mean one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A exists alone, A and B both exist, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
Referring to fig. 1, an architecture diagram of a virtual scenario development and exhibition system according to an exemplary embodiment of the present application is shown. As shown in fig. 1, the virtual scene development and exhibition system includes a development end device 110 and a virtual scene exhibition device 120.
The development terminal device 110 may be a computer device corresponding to a developer/operator of the virtual scenario.
After the development of the virtual scene is completed, data related to the rendering of the virtual scene may be stored or updated into the virtual scene presentation apparatus 120.
The virtual scene presentation device 120 is a computer device that runs an application program corresponding to a virtual scene. Wherein, when the virtual scene presentation apparatus 120 is a user terminal, the application program may be a client program; when the virtual scene presentation device 120 is a server, the application may be a server/cloud program.
The virtual scene refers to a virtual scene that an application program displays (or provides) while running on a terminal. The virtual scene may be a simulated environment of the real world, a semi-simulated, semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene; the following embodiments take a three-dimensional virtual scene as an example, but are not limited thereto.
For a three-dimensional virtual scene, in order to provide a better experience, the user is generally allowed to adjust the view angle from which the virtual scene is observed over a relatively large range. However, in many virtual scenes (such as an auto chess game scene), the view angles from which users actually observe the virtual scene are usually concentrated in a small subset, and view angle adjustments are rare. That is, in these virtual scenes, the view angle parameters used in graphics rendering tend to be concentrated in one or two small spaces; for example, a full-scene overview view angle may account for less than 20% or even less than 1% of the time, while the view angle at other times is fixed at a certain position. In such virtual scenes, if the view angle parameters can be grouped into one or a few sparse sets, the continuity of visibility information over a small range of view angles can be used to precompute the scene visibility set corresponding to each view angle parameter set. Based on this idea, the embodiments of the application precompute the visible set of scene models corresponding to a view angle parameter set; when the virtual scene is displayed and the view angle parameters satisfy the condition, rendering is submitted using the precomputed visible set, reducing vertex coloring of occluded models and improving rendering efficiency.
The scheme is divided into an offline part and an online part: the offline part is responsible for precomputing the information of the scene model visible sets corresponding to the view angle parameters, and the online part is responsible for submitting rendering, during virtual scene operation, according to the visible set information under the specific view angle parameters.
The offline part may be executed by the development end device 110 and may include: acquiring a high-probability view angle set corresponding to the virtual scene, where the high-probability view angle set comprises camera view angles whose access probability in the virtual scene is greater than a probability threshold; determining, based on the high-probability view angle set, visible model part indication information, which indicates the model parts of each scene model in the virtual scene that are not occluded under the high-probability view angle set; and generating model visibility information corresponding to the high-probability view angle set. The model visibility information instructs the virtual scene display device to submit rendering data of the model parts indicated by the visibility information to the rendering component when the target view angle belongs to the high-probability view angle set; the target view angle is the camera view angle from which the virtual scene is observed.
The online part may be performed by the virtual scene presentation device 120 and may include: acquiring a target view angle, which is the camera view angle from which the virtual scene is observed, where the virtual scene corresponds to a high-probability view angle set comprising camera view angles whose access probability in the virtual scene is greater than a probability threshold; in response to the target view angle belonging to the high-probability view angle set, acquiring model visibility information corresponding to the high-probability view angle set, which indicates the model parts of each scene model in the virtual scene that are not occluded under the high-probability view angle set; submitting rendering data of the model parts indicated by the visibility information to the rendering component to render a scene picture of the virtual scene through the rendering component; and displaying the scene picture of the virtual scene.
In this solution, the indication information of the model parts that are not occluded under the high-probability view angle set is generated in advance. During virtual scene rendering, when the target view angle of the user falls within the high-probability view angle set, only the non-occluded model parts are rendered; the occluded model parts need not be submitted for rendering and accordingly need not be vertex-colored, so the vertex coloring work in the rendering process can be reduced in most cases, improving the rendering efficiency of the virtual scene.
Referring to Fig. 2, a flowchart of an information generation and picture display method in a virtual scene according to an exemplary embodiment of the application is shown. The method may be performed by computer devices, namely the development end device 110 and the virtual scene presentation device 120 in the system shown in Fig. 1. As shown in Fig. 2, the method may include the following steps:
In step 201, the development end device acquires a high-probability view angle set corresponding to the virtual scene, where the high-probability view angle set includes camera view angles whose access probability in the virtual scene is greater than a probability threshold.
In the embodiments of the application, the virtual scene may correspond to one or more high-probability view angle sets, with no intersection between any two of them. For example, two disjoint high-probability view angle sets may be provided.
A high-probability view angle set may include one or more camera view angles that are accessed with high probability. A camera view angle being accessed means that, during the running of the virtual scene, the camera view angle for observing the virtual scene is set to that camera view angle (either by default system settings or according to the user's view angle adjustment operations).
In one possible implementation, the high probability view-angle set described above may be set manually by a developer/operator.
In another possible implementation, the high-probability view angle set may be obtained through statistical analysis by the development end device.
For example, the development end device may obtain the operation records of the virtual scene, calculate, based on these records, the probability that each camera view angle corresponding to the virtual scene is accessed, and add the camera view angles whose probability is higher than a probability threshold to the high-probability view angle set. The probability threshold may be preset in the development end device by the developer.
The probability that a camera view angle is accessed may be the ratio of the number of times that camera view angle is accessed to the total number of times all camera view angles in the virtual scene are accessed.
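A minimal sketch of this statistic, assuming accessed_views is one camera-view identifier per access taken from the operation records; the threshold value here is purely illustrative:

```python
from collections import Counter

def high_probability_view_set(accessed_views, probability_threshold=0.05):
    """Return the set of views whose access probability exceeds the threshold."""
    counts = Counter(accessed_views)
    total = sum(counts.values())
    return {view for view, count in counts.items()
            if count / total > probability_threshold}
```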
After obtaining the high-probability view angle set, the development end device may determine visible model part indication information based on the set; the indication information is used for indicating the model parts of each scene model in the virtual scene that are not occluded under the high-probability view angle set. This process is described in step 202 below.
Step 202, a development end device acquires a polygon visibility array of each scene model under the high probability view angle set as the indication information of the visible model part; the polygon visibility array is used to indicate whether the polygons in each of the scene models are visible under the set of high probability view angles, respectively.
In a three-dimensional virtual scene, the scene may contain multiple scene models, such as buildings, virtual characters and virtual terrain. A scene model is composed of at least two polygons; in the general case, for example, a scene model may be made up of several triangles sharing common edges between adjacent triangles, the triangles being connected edge-to-edge to form the outer surface of the scene model.
In the embodiments of the application, based on the principle that a scene model is composed of polygons, it can be determined whether each polygon in the scene model is visible under the high-probability view angle set, so that the invisible polygons are culled under that set, achieving the effect of culling the occluded parts of the scene model under the high-probability view angle set.
In one possible implementation, the polygon visibility array includes values corresponding to polygons in the scene model.
The process of obtaining the polygon visibility array of each scene model under the high probability view angle set may be as follows:
acquiring a target polygon, where the target polygon is a polygon in a visible state under a first camera view angle among the polygons contained in a target scene model; the target scene model is a scene model, among the scene models, that is occluded under the first camera view angle; and the first camera view angle is any camera view angle in the high-probability view angle set;
and setting the value corresponding to the target polygon in the polygon visibility array to a specified value.
In this embodiment, the development end device may indicate, through an array, whether each polygon of each scene model in the virtual scene is visible under the high-probability view angle set. For example, the length of the array may be the total number of polygons contained in the scene models of the virtual scene, with each value indicating whether one polygon is visible under the high-probability view angle set; for instance, the value for a polygon visible under the set may be 1, and 0 otherwise.
In one possible implementation, before the target polygon is acquired, the method further includes:
screening out, from the scene models, a first type of scene model satisfying an occluding condition and a second type of scene model satisfying an occluded condition;
and determining, among the second type of scene models, a scene model occluded by the first type of scene model under the first camera view angle as the target scene model.
In a virtual scene, multiple scene models usually exist at the same time, and under a single view angle, some of them may be occluded while others are not. If visibility detection were performed on every polygon of every scene model in the virtual scene, a high computational cost would be introduced, affecting the efficiency of offline culling.
In this regard, in the embodiments of the application, before the target polygons under a given camera view angle are acquired, it may first be determined which scene models in the virtual scene are occluded by other scene models under that view angle, and those occluded scene models are taken as the target scene models. The target-polygon acquisition step is then performed only for the occluded scene models; all polygons of the non-occluded scene models are considered visible under that camera view angle, while the polygons of the occluded scene models other than the target polygons may be considered invisible.
That is, in the above polygon visibility array, the values corresponding to the target polygons of the occluded scene models and the values corresponding to all polygons of the non-occluded scene models may be set to 1, while the values corresponding to the remaining polygons of the occluded scene models are set to 0.
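A sketch of this value assignment under assumed data structures (the mapping and set names here are illustrative, not part of the original description):

```python
def build_polygon_visibility_array(model_polygons, occluded_model_ids, target_polygons):
    """model_polygons: mapping model_id -> list of global polygon indexes;
    target_polygons: set of global indexes found visible in occluded models."""
    total = sum(len(indexes) for indexes in model_polygons.values())
    visibility = [0] * total
    for model_id, indexes in model_polygons.items():
        occluded = model_id in occluded_model_ids
        for index in indexes:
            # Non-occluded models are fully visible; occluded models keep
            # only their detected target polygons.
            if (not occluded) or (index in target_polygons):
                visibility[index] = 1
    return visibility
```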
In one possible implementation manner, the process of acquiring the target polygon may be as follows:
numbering the vertexes of each polygon in the target scene model;
assigning different color values to the vertices of the respective polygons based on the numbers of the vertices of the respective polygons;
performing vertex coloring rendering on the target scene model based on the first camera view angle to obtain a vertex coloring rendering image corresponding to the target scene model;
obtaining visible vertexes in vertexes of each polygon based on color values on each pixel point in the vertex coloring rendering image;
the target polygon is acquired based on visible ones of the vertices of the respective polygons.
In the embodiments of the application, offline rendering may be used to determine which polygons in a scene model are visible and which are not. For example, the development end device assigns different color values to the vertices of each polygon in a scene model, then performs vertex coloring on the scene model according to the previously set rendering parameters to obtain a vertex-colored image into which the vertices of the visible polygons are mapped. The development end device then traverses the color values of each pixel point in the image: the vertices corresponding to the traversed color values can be determined as visible vertices, and the visible polygons of the scene model can then be obtained from the visible vertices.
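A minimal sketch of this pixel traversal, assuming pixels is an iterable of (r, g, b) tuples read back from the vertex-colored image and using the color-to-number decoding given below:

```python
def visible_vertex_numbers(pixels):
    """Collect the vertex numbers whose colors appear in the rendered image."""
    visible = set()
    for r, g, b in pixels:
        number = b * 256 ** 2 + g * 256 + r - 1
        if number >= 0:  # the black background (0, 0, 0) decodes to -1
            visible.add(number)
    return visible
```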
In practical applications, the polygons in each scene model may be triangles, and vertices may be shared between triangles. In this case, the program may unpack the shared vertices into non-shared vertices; the unpacked vertices have overlapping positions, but each carries the vertex color of the triangle it belongs to. Since the shared vertices are unpacked only to resolve the visibility of the triangles they belong to, the temporarily created model can be discarded once the corresponding visibility information is obtained, so the topology of the finally displayed model is not affected. Because multi-sample anti-aliasing is turned off during rendering, two colors are never blended within the same pixel point; a triangle smaller than one pixel point, however, may be hidden. Since the rendering resolution during precomputation can be several times the resolution of the final rendering, assigning different colors to shared vertices does not affect the subsequent decoding of vertex numbers.
In the embodiments of the application, the development end device may reorganize the vertex array of the model according to the triangle numbers and assign each triangle its determined vertex color. The numbered index is encoded into a color whose red, green and blue channels each take values from 0 to 255:

color_red = (index + 1) mod 256
color_green = ⌊(index + 1) / 256⌋ mod 256
color_blue = ⌊(index + 1) / 256²⌋

The corresponding decoding from a color back to the numbered index is then:

index(color) = color_blue × 256² + color_green × 256 + color_red − 1
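The encoding formula is not reproduced verbatim in this text; the pair below is inferred as the exact inverse of the decoding formula above: index + 1 is stored little-endian in base 256 across the red, green and blue channels, so the black background (0, 0, 0) decodes to -1 and never collides with a valid number.

```python
def encode_color(index):
    """Map a triangle/vertex number to an (r, g, b) color."""
    value = index + 1
    return (value % 256, (value // 256) % 256, (value // 256 ** 2) % 256)

def decode_index(color):
    """Map an (r, g, b) color back to the triangle/vertex number."""
    r, g, b = color
    return b * 256 ** 2 + g * 256 + r - 1

# Round-trip check over a sample of indexes.
assert all(decode_index(encode_color(i)) == i for i in range(100000))
```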
the development end equipment can generate model visibility information corresponding to the high-probability view angle set based on the polygon visibility array; wherein the model visibility information is used to indicate model portions of individual scene models in the virtual scene that are not occluded under the set of high probability perspectives. The process of generating the model visibility information may refer to the descriptions of steps 203 to 206 described below.
Step 203, the development end device orders the polygons of each scene model based on the polygon visibility array; the polygons visible under the set of high probability view angles are contiguous in the ordered polygons of each of the scene models.
In the embodiments of the application, to improve the efficiency of submitting for rendering the sub-model corresponding to a high-probability view angle set during subsequent virtual scene display, the development end device may reorder the polygon-related data (including the polygon array and the vertex array) of each scene model in the virtual scene according to the polygon visibility array, i.e., arrange the data of polygons visible under the same high-probability view angle set contiguously, so that the rendering data to be submitted can be quickly queried during subsequent submission.
Step 204, the development terminal equipment obtains the polygonal visible information of the model part which is not shielded based on the polygonal index numbering result; the polygon index numbering result is a result of sequentially numbering the indexes of the polygons of each scene model after sorting; the polygon visible information includes index intervals of polygons in the non-occluded model portion.
In the embodiment of the present application, in order to improve the query efficiency during the subsequent rendering, the index of the polygon may be renumbered according to the sorting result in the step 203, so as to query the polygon array corresponding to the high probability view angle set.
Step 205, the development terminal equipment obtains the vertex visible information of the model part which is not shielded based on the polygon vertex index numbering result; the polygon vertex index numbering result is a result of sequentially numbering indexes of vertices in the ordered polygons of each scene model; the vertex visibility information includes index intervals of polygon vertices in the non-occluded model portion.
Similar to the index of the polygon, in the embodiment of the present application, the index of the vertex of the polygon may be further renumbered according to the sorting result in the step 203, so as to query the vertex array corresponding to the high probability view-angle set later.
In step 206, the development end device obtains the polygon visible information of the model part which is not blocked and the vertex visible information of the model part which is not blocked as the model visible information corresponding to the high probability view angle set.
That is, in the embodiments of the application, in the offline stage, the development end device culls, at the granularity of polygons and polygon vertices, the model parts occluded in the virtual scene under the high-probability view angle set, and keeps the non-occluded model parts as the sub-model of the virtual scene corresponding to that set.
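A hedged sketch of the reordering in steps 203 to 205, under the assumption that polygons is a list of polygon records parallel to the 0/1 visibility array: visible polygons are moved to the front so the non-occluded part becomes one contiguous index interval that can be submitted directly at runtime.

```python
def reorder_by_visibility(polygons, visibility):
    """Return (reordered polygons, index interval of the visible part)."""
    visible = [p for p, v in zip(polygons, visibility) if v == 1]
    hidden = [p for p, v in zip(polygons, visibility) if v == 0]
    reordered = visible + hidden
    visible_interval = (0, len(visible))  # half-open [start, end)
    return reordered, visible_interval
```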
In the scheme shown in the embodiment of the application, the development end equipment can provide a rendering engine tool plug-in, so that support is provided for the graphic rendering development flow. The developer can set pre-calculation parameters (including the high probability view angle set) in the engine, open the scene and call the pre-calculation command, thus completing the pre-calculation stage of the model scene.
The setting interface of the pre-calculation parameters can comprise a camera setting part of each view parameter set, and can also comprise a possible transformation information set of the camera in the view parameter set. In addition, in order to facilitate the subsequent determination of the model portion that is not occluded by means of rendering, rendering-related parameters may be further set, such as a shader for drawing the vertex color during pre-calculation, a rendering resolution set, and a rendering precision magnification, etc.
Fig. 3 is a schematic diagram of a precomputation parameter setting interface according to an embodiment of the application. As shown in Fig. 3, Camera Pattern A 31 is the camera settings portion of the first view angle parameter set, and Transforms A contains the set of possible transform information for the camera in the first view angle parameter set. Likewise, Camera Pattern B and Transforms B are the corresponding settings for the second view angle parameter set. The vertex Color Shader 33 is the shader used to draw vertex colors during precomputation. The rendering resolution (Screen Size) 34 is the rendering resolution shared by the two view angle parameter sets. The precision magnification (accuracyTimes) 35 sets the precision magnification used for rendering during precomputation.
After the setting is completed, a scene requiring pre-calculation is opened, and fig. 4 is a schematic diagram of an engine menu bar according to an embodiment of the present application. As shown in fig. 4, the pre-calculation command and the debug command may be invoked through the menu bar 41 of the engine.
After the debug window command is opened, the engine may expose a debug window, and fig. 5 is a schematic diagram of a debug window according to an embodiment of the present application. As shown in fig. 5, a related popup window 51 may be displayed in the window, and the number of scene models that the current virtual scene has may be displayed in the related popup window 51.
Clicking different buttons in the editor displays the models in the scene according to different screening rules, allowing a developer to check whether the occluding and occluded objects in the current scene have been screened correctly.
After the precomputation is completed, the scene is run, and the component performing the precomputation can be found on the camera object. The context menu of the component contains operations for culling the occluded model parts under each view angle parameter set. The component provides a code interface, also allowing developers to trigger the precomputation from other logic scripts.
Fig. 6 shows a culling control interface according to an embodiment of the application. As shown in Fig. 6, the context menu 61 corresponding to the "VRIBO control" component of the camera object contains "STA", "STB" and "RM" button controls; clicking these buttons enables culling of the scene under view angle set A, culling under view angle set B, and no culling, respectively. The component provides a code interface, also allowing developers to trigger the sub-model settings from other logic scripts.
After generating the model visibility information corresponding to the high-probability view angle set, the development end device may deploy the high-probability view angle set and its corresponding model visibility information to the virtual scene display device, either as part of the rendering data of the virtual scene or as associated data of the virtual scene.
Step 207: the virtual scene display device acquires a target view angle, which is the camera view angle for observing the virtual scene; the virtual scene corresponds to a high-probability view angle set comprising camera view angles whose access probability in the virtual scene is greater than a probability threshold.
In the embodiment of the application, the virtual scene display device can acquire the camera view angle for observing the virtual scene at the current moment in the process of displaying the virtual scene, so as to obtain the target view angle.
Step 208: in response to the target view angle belonging to the high-probability view angle set, the virtual scene display device obtains the model visibility information corresponding to the high-probability view angle set; the model visibility information is used to indicate the model parts of each scene model in the virtual scene that are not occluded under the high-probability view angle set.
The virtual scene display device can detect whether the target view angle belongs to a high-probability view angle set. If so, the processor (such as a CPU) may submit the rendering data of the sub-model corresponding to that set to a rendering component (such as a GPU) for rendering, so the rendering component only needs to perform vertex coloring on the visible sub-model rather than on the complete scene models in the virtual scene. In this step, if the target view angle belongs to the high-probability view angle set, the virtual scene display device may acquire the model visibility information corresponding to that set.
The model visibility information may indicate vertex indexes and polygon indexes of polygons corresponding to model parts which are not blocked under the high-probability view angle set, and may be used for querying rendering data corresponding to model parts which are not blocked under the high-probability view angle set.
Step 209, the virtual scene presentation device submits the rendering data of the model portion indicated by the model visibility information to a rendering component to render a scene picture of the virtual scene through the rendering component.
As can be seen from the above steps, the visibility information includes the polygon visible information of the non-occluded model part and the vertex visible information of the polygons of the non-occluded model part. In the embodiments of the application, the virtual scene display device can, through the polygon visible information, read the polygon index array of the non-occluded model part under the high-probability view angle set, read the vertex array of the polygon vertices of the non-occluded model part, and submit the read polygon array and vertex array to the rendering component for rendering, so as to render the sub-model under the high-probability view angle set.
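A hedged runtime sketch of steps 208 to 210; view_sets, the attribute names, and submit_draw_range are hypothetical stand-ins for the engine-side structures and draw call described in the text, not actual engine APIs:

```python
def submit_scene_rendering(target_view, view_sets, submit_draw_range, full_scene):
    """Submit either the precomputed sub-model or the full scene models."""
    for view_set in view_sets:  # the precomputed high-probability view sets
        if target_view in view_set.views:
            info = view_set.model_visibility_info  # index intervals of the visible part
            submit_draw_range(info.polygon_interval, info.vertex_interval)
            return
    # Target view outside every high-probability set: degrade to the full model.
    submit_draw_range(full_scene.polygon_interval, full_scene.vertex_interval)
```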
In step 210, the virtual scene presentation device renders a scene picture of the virtual scene based on the scene model of the virtual scene in response to the target perspective not belonging to the set of high probability perspectives.
In the embodiments of the application, if the target view angle does not belong to any high-probability view angle set, the processor in the virtual scene display device may submit the rendering data of each scene model in the virtual scene to the rendering component for rendering, so as to ensure normal display under camera view angles outside the high-probability view angle sets.
Step 211, the virtual scene display device displays a scene picture of the virtual scene.
In summary, in the solution shown in the embodiments of the application, the indication information of the model parts not occluded under the high-probability view angle set is generated in advance. During virtual scene rendering, when the target view angle of the user falls within the high-probability view angle set, only the non-occluded model parts are rendered; the occluded model parts need not be submitted for rendering and accordingly need not be vertex-colored, so the vertex coloring work in the rendering process can be reduced in most cases, improving the rendering efficiency of the virtual scene.
The solution shown in the foregoing embodiment corresponding to Fig. 2 can be divided into three aspects: the system design, the system precomputation implementation process, and the system runtime implementation process.
1. System design
When scene rendering is performed, let Ue be the complete set of camera view angle parameters e, and let p(e) be the probability that camera parameter e is active during rendering. Then:

Σ_{e ∈ Ue} p(e) = 1

According to the frequently active ranges of the view angle e, two disjoint sparse view angle sets Se_1 and Se_2 are created so as to cover activity probabilities p(Se_1) and p(Se_2) that are as high as possible. Then Se_0 = Ue − Se_1 − Se_2 serves as the degradation scheme when the culling condition is not satisfied, with activity probability p(Se_0) = 1 − p(Se_1) − p(Se_2).
Define St as a subset of the triangle set Ut, and define the visible set at a certain view angle e as St(e); when the view angle e is undetermined, the visibility is taken to be the complete set Ut. The visible sets St_e1 and St_e2 of Se_1 and Se_2 are determined by calculation in the precomputation stage, and four sets St_1, St_2, St_3, St_4 are then created such that:

St_1 = St_e1 − (St_e1 ∩ St_e2)
St_2 = St_e1 ∩ St_e2
St_3 = St_e2 − (St_e1 ∩ St_e2)
St_4 = Ut − (St_e1 ∪ St_e2)

so that St_1 ∪ St_2 = St_e1 and St_2 ∪ St_3 = St_e2. The vertex array and the index array of the model are rearranged in the order St_1, St_2, St_3, St_4, and different subsets of the model can then be submitted for rendering under different view angle sets: a contiguous interval covering St_1 and St_2 under Se_1, and a contiguous interval covering St_2 and St_3 under Se_2.
A vertex/triangle array visualization schematic of the model may be as shown in fig. 7.
In modern graphics rendering pipelines, when rendering is submitted to the GPU (Graphics Processing Unit), it is possible to specify that only a portion of the triangle array is drawn, or to specify the sub-interval of the vertex array corresponding to that portion of the triangles, thereby implementing triangle culling and vertex culling. In the actual engineering implementation, the vertices can also be processed following the same reorganization idea as the triangles, so that triangle culling and vertex culling are performed simultaneously under Se_1 and Se_2. After the precomputation stage is completed, the new model data is saved and the sub-model information is exported.
At runtime, the corresponding visible set is obtained by querying which view angle set the current view angle falls in, and the sub-model information is sent to the GPU, so scene triangles and vertices can be culled at extremely low cost.
2. System pre-calculation implementation process
A flowchart of the system's precomputation implementation may be as shown in Fig. 8. First, the program's precomputation configuration is input (step S81). The program then acquires the scene settings, obtaining the settings related to the scene, the sparse view angles, and the precomputation, and sets up the scene accordingly (step S82). Next, the occluders and occluded objects in the scene are found, vertex colors are assigned to all vertices in the order of the scene triangle indexes, and other irrelevant renderers are turned off; the view angle set Se_1 is traversed to render and compute the visibility of St_e1, and the view angle set Se_2 is traversed to render and compute the visibility of St_e2 (step S83). Then, each occluded object model of the scene is reorganized (step S84): vertex visibility is derived from the triangle visibility, the model's vertices and triangles are reorganized according to visibility, the vertex indexes of the triangles are remapped, and the reorganized model file is output together with the computed sub-model information of St_e1 and St_e2 (step S85).
The main steps in the above process are realized in detail as follows:
1) Setting up a scene
The scene is set up, and the occluders and occluded objects of the current scene are acquired according to rules. An occluder is defined as a model that can occlude other models during real-time rendering, and can be screened as a static model with non-translucent materials; an occluded object is defined as a model whose triangles can be culled according to the view angle during real-time rendering, and can be screened by conditions such as: static, non-translucent materials, sufficient culling value, etc.
The existing scene and models are backed up, other irrelevant rendering components are turned off, and the screened occluded objects are numbered by triangle. The vertex array of each model is reorganized according to the triangle numbers, and each triangle is assigned its determined vertex color. The numbered index is encoded into a color whose red, green and blue channels each take values from 0 to 255:

color_red = (index + 1) mod 256
color_green = ⌊(index + 1) / 256⌋ mod 256
color_blue = ⌊(index + 1) / 256²⌋

The corresponding decoding from a color back to the numbered index is then:

index(color) = color_blue × 256² + color_green × 256 + color_red − 1
2) Computing visibility of view-angle sets
This step obtains the visible sets St_e1 and St_e2 from the view angle parameter sets Se_1 and Se_2 respectively. Taking Se_1 as an example, the input is the view angle parameter set Se_1 and the output is St_e1; the output flow of the triangle visible set can be as shown in Fig. 9. First, the view angle parameter set Se_1 is input (step S91), MaxIndex is set to the total number of occluded-object triangles (step S92), and a result array of length MaxIndex is initialized with all values set to 0 (step S93). The program then traverses the view angle parameters of Se_1: it checks whether all view angle parameters of Se_1 have been traversed (step S94); if not, the camera's view angle parameters are set (step S95), the current scene is rendered with the shader that draws vertex colors (step S96), and the rendering result is read back (step S97). The program then checks whether every pixel of the frame buffer has been traversed (step S98); if so, it returns to step S94; if not, the pixel color is decoded into a numbered index (step S99), result[index] is set to 1 (step S910), and step S98 is executed again. When step S94 determines that all view angle parameters have been traversed, the result array is output as the visible set St_e1.
In this process, the view parameters should include the rendering resolution of the camera, the view port aspect ratio, the field angle, the camera position, and the amount of spatial rotation. It should be noted that before invoking rendering, it should be ensured that the multisampling antialiasing and high dynamic range functions of the camera are turned off, the background color is set to black, and the rendering target is set to the rendering texture of a given resolution.
When the rendering result is read back, the currently active rendering texture is set to the texture that was just rendered to, and a ReadPixels instruction is executed to transfer the data from the GPU to the CPU. This yields a color array of the rendering resolution's size, and the triangle corresponding to each pixel is obtained via the decoding formula of step 1.
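A minimal sketch of this render-and-readback loop, assuming a hypothetical renderer interface (set_camera, render_id_pass and read_pixels stand in for engine-specific calls such as the ReadPixels instruction mentioned above):

```python
def compute_visibility(view_params: list, max_index: int, renderer) -> list[int]:
    """Return the triangle visible set St for one view parameter set Se
    (fig. 9): result[i] == 1 iff triangle i was rendered by some view."""
    result = [0] * max_index            # one slot per occluded-object triangle
    for params in view_params:          # traverse the view parameter set
        renderer.set_camera(params)     # resolution, aspect, FOV, position, rotation
        frame = renderer.render_id_pass()            # vertex-color pass, black background
        for r, g, b in renderer.read_pixels(frame):  # GPU -> CPU readback
            index = b * 256 ** 2 + g * 256 + r - 1   # decode pixel color
            if 0 <= index < max_index:  # background pixels decode to -1
                result[index] = 1       # the triangle covered a pixel: visible
    return result
```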
3) Reorganizing the occluded object models
This step is a key step of the application. After step 2, the program has two arrays St_e1 and St_e2 for the visibility of the scene triangles, both of length MaxIndex. An information visualization in which the two sets of triangle visibility information are stitched together with the triangle index array of the original scene model may be as shown in fig. 10 (the data portion there is random data).
Taking model 1 as an example, the model reorganization process comprises the following steps:
S1. Extract the current model's triangle visibility St_e1 and St_e2 from the scene triangle visibility results
S2. Calculate the model vertex visibility Sv_e1 and Sv_e2 from the model triangle visibility
S3. Reassemble the vertex array
S4. Reassemble the triangle array
S5. Update the vertex references in the triangle array to the new vertex indexes
S6. Output the model and sub-model information
Referring to fig. 11, a vertex visibility output schematic diagram according to an embodiment of the present application is shown. Since the triangle array stores model vertex indexes in triplets, the vertex visibility Sv_e1 can be obtained from St_e1 by the method shown in fig. 11. First, the model and its triangle visible information are input (step S1101), the total vertex count vertexCount is obtained (step S1102), and an array Sv_e1 of length vertexCount is initialized to all zeros (step S1103). The program then checks whether all triangles have been traversed (step S1104); if not, it checks whether the current triangle is visible (step S1105). If the triangle is invisible, execution returns to step S1104; if it is visible, its vertex indexes a, b and c are taken out (step S1106) and Sv_e1[a], Sv_e1[b] and Sv_e1[c] are set to 1 (step S1107). When the traversal is complete, the vertex visibility array Sv_e1 is output (step S1108).
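A sketch of this triangle-to-vertex visibility derivation, assuming the triangle array is given as vertex-index triplets (names are illustrative):

```python
def vertex_visibility(triangles: list[tuple[int, int, int]],
                      tri_visible: list[int],
                      vertex_count: int) -> list[int]:
    """Derive per-vertex visibility Sv from per-triangle visibility St:
    a vertex is visible iff some visible triangle references it."""
    sv = [0] * vertex_count
    for (a, b, c), visible in zip(triangles, tri_visible):
        if visible:
            sv[a] = sv[b] = sv[c] = 1
    return sv
```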
Taking triangle 2 as an example: from step 2 the program knows that this triangle is visible under Se_1 and invisible under Se_2, so its vertices 4, 5 and 6 are likewise visible under Se_1 and invisible under Se_2. Through this process, a visualization of the visible sets may be as shown in fig. 12. The vertex attributes in the vertex array carry the information of each vertex, which may include vertex coordinates, texture coordinates, normals, tangents, etc.; concatenated together, these attribute data form the actual content of the vertex array, following the data exported from the three-dimensional modeling of the model.
In order to cull vertex information under Se_1 and Se_2, the program needs to reassemble the entire vertex attribute array according to the resulting Sv_e1 and Sv_e2. The vertex data are reassembled in the same manner as the triangle array reassembly given in the system design. Taking the vertex array corpus as Ut and solving Sv_e1 ∩ Sv_e2, the following four temporary sections are obtained:

Sv1 = Sv_e1 − (Sv_e1 ∩ Sv_e2) (visible under Se_1 only)
Sv2 = Sv_e1 ∩ Sv_e2 (visible under both Se_1 and Se_2)
Sv3 = Sv_e2 − (Sv_e1 ∩ Sv_e2) (visible under Se_2 only)
Sv4 = Ut − (Sv_e1 ∪ Sv_e2) (visible under neither)
Then, the vertex arrays are rearranged in the order Sv1, Sv2, Sv3, Sv4 to obtain a new vertex array; a schematic diagram of the new vertex array may be as shown in fig. 13. It can be seen that after recombination, the visibility corresponding to Sv1, Sv2, Sv3 and Sv4 is, respectively: visible under e1 and invisible under e2; visible under both e1 and e2; invisible under e1 and visible under e2; and invisible under both.
Within the new vertex array, the sections then occupy the following index intervals: Sv1 occupies [0, |Sv1|), Sv2 occupies [|Sv1|, |Sv1| + |Sv2|), Sv3 occupies [|Sv1| + |Sv2|, |Sv1| + |Sv2| + |Sv3|), and Sv4 occupies the remainder.
The vertex visible subintervals VBO_e0, VBO_e1, VBO_e2 under Se_0, Se_1, Se_2 can then be obtained as follows, where a VBO is a two-tuple defined as {offset, count}, offset being the offset into the vertex array and count the number of vertices:

VBO_e0 = {0, |Ut|}
VBO_e1 = {0, |Sv1| + |Sv2|}
VBO_e2 = {|Sv1|, |Sv2| + |Sv3|}
The whole vertex array is then stored in the new vertex index order, replacing the original vertex array; this completes the reassembly of the vertex array.
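A sketch of the vertex partitioning and subinterval computation described above; Python tuples stand in for the {offset, count} two-tuples, and all names are illustrative:

```python
def reorganize_vertices(sv_e1: list[int], sv_e2: list[int]):
    """Partition vertex indices into Sv1 (Se_1 only), Sv2 (both),
    Sv3 (Se_2 only) and Sv4 (neither), so the vertices visible
    under Se_1 and under Se_2 each form one contiguous subinterval."""
    sv1, sv2, sv3, sv4 = [], [], [], []
    for i, (v1, v2) in enumerate(zip(sv_e1, sv_e2)):
        (sv2 if v1 and v2 else sv1 if v1 else sv3 if v2 else sv4).append(i)
    new_order = sv1 + sv2 + sv3 + sv4            # new vertex array order
    old_to_new = {old: new for new, old in enumerate(new_order)}
    vbo_e1 = (0, len(sv1) + len(sv2))            # {offset, count} under Se_1
    vbo_e2 = (len(sv1), len(sv2) + len(sv3))     # {offset, count} under Se_2
    vbo_e0 = (0, len(new_order))                 # full array under Se_0
    return new_order, old_to_new, vbo_e0, vbo_e1, vbo_e2
```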
The triangle array is then reordered in the same manner, and a schematic diagram of the reordered triangle array may be as shown in fig. 14.
By the same construction, the triangle array is partitioned into four sections St1, St2, St3 and St4 (visible under Se_1 only, under both, under Se_2 only, and under neither, respectively), and the triangle visible subintervals IBO_e0, IBO_e1, IBO_e2 under Se_0, Se_1, Se_2 are:

IBO_e0 = {0, N}, where N is the total number of triangles of the model
IBO_e1 = {0, |St1| + |St2|}
IBO_e2 = {|St1|, |St2| + |St3|}
the final step of the triangle array reorganization also needs to be considered, and the positions of the vertexes are changed and need to be remapped as the vertex array reorganizes. For example, the old vertex index corresponding to the first triangle of the new vertex index array is 4,5,6, and the corresponding three vertex attributes are E, F, G, but the three vertices are not at 4,5,6 but at 0,1,2 inside the new vertex array. It is required to reverse the mapping by the transformation relation according to the vertex array. The exact indices 0,1,2 are obtained, and the remapped vertex index schematic can be shown in fig. 15.
After the mapping is completed, the new vertex indexes replace the contents of the triangle array, and the whole triangle array is saved; this completes the reassembly of the triangle array.
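A sketch combining the triangle reordering with this index remapping; old_to_new is the vertex mapping produced by the vertex reassembly sketch above, and all names are illustrative:

```python
def reorganize_triangles(triangles, st_e1, st_e2, old_to_new):
    """Sort triangles into St1..St4 (same section order as the vertices),
    then remap each triangle's old vertex indices into the new array."""
    st1, st2, st3, st4 = [], [], [], []
    for tri, v1, v2 in zip(triangles, st_e1, st_e2):
        (st2 if v1 and v2 else st1 if v1 else st3 if v2 else st4).append(tri)
    ordered = st1 + st2 + st3 + st4
    remapped = [tuple(old_to_new[v] for v in tri) for tri in ordered]
    ibo_e1 = (0, len(st1) + len(st2))           # {offset, count} under Se_1
    ibo_e2 = (len(st1), len(st2) + len(st3))    # {offset, count} under Se_2
    ibo_e0 = (0, len(ordered))                  # full array under Se_0
    return remapped, ibo_e0, ibo_e1, ibo_e2
```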
Since the model length before and after reassembly is the same, and the full triangle and vertex sets are visible under Se_0, no additional data needs to be saved for Se_0; for Se_1 and Se_2, the data to be saved are IBO_e1, VBO_e1 and IBO_e2, VBO_e2. These saved data are the model visibility information corresponding to the high-probability view angle set in the embodiment shown in fig. 2.
The processed model (comprising the renumbered vertex array and triangle array) replaces the original model, and the sub-model interval information (namely the model visibility information) is stored in a renderer script of the scene; the pre-calculation process is then complete.
3. System runtime implementation process
A schematic diagram of the virtual scene runtime implementation may be as shown in fig. 16. The program collects the current view parameter e (step S1601) and calculates the corresponding sparse view angle set according to the definitions and classification in the system design chapter (step S1602). It then determines whether e belongs to the Se_1 range (step S1603); if so, the visible set is set to St_e1 (step S1604). Otherwise it determines whether e belongs to the Se_2 range (step S1605); if so, the visible set is set to St_e2 (step S1606); if e belongs to neither range, the visible set is set to St_e0 (step S1607). Finally, the corresponding VBO and IBO subinterval information is taken out, and the selected visible set is submitted to the GPU through the model's sub-model information for rendering (step S1608).
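A sketch of this runtime selection; the membership tests and the submit call are placeholders for engine-specific logic, and the VBO/IBO pairs are the subintervals saved during pre-calculation:

```python
def render_frame(view_e, model, gpu):
    """Pick the sub-model for the current view parameter e (fig. 16)."""
    if model.se_1.contains(view_e):       # e within the Se_1 range?
        vbo, ibo = model.vbo_e1, model.ibo_e1
    elif model.se_2.contains(view_e):     # e within the Se_2 range?
        vbo, ibo = model.vbo_e2, model.ibo_e2
    else:                                 # free view Se_0: full model
        vbo, ibo = model.vbo_e0, model.ibo_e0
    gpu.submit(model.vertices, model.indices, vbo, ibo)  # render sub-model only
```

Since the camera parameters change infrequently in the target scenes, the selected subinterval can also be cached and refreshed only at view transitions, as noted below.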
The application provides a scheme for culling scene triangles and vertices under sparse view parameters. It is suitable for graphics products with sparse view parameters and high scene complexity, runs compatibly across different engines, devices and platforms, and trades low-to-negligible runtime and memory consumption for a considerably higher culling rate.
The application solves a problem that existing schemes in the industry cannot: dynamic culling schemes at the CPU stage use the whole model as the culling granularity and cannot cull more finely; static scene culling schemes can reach fine granularity, but the culling rate is low because of the large degree of freedom of the view space; and fine-grained dynamic model culling happens at the GPU stage, where the consumption of GPU bandwidth and the pixel shader remains large, bringing ineffective optimization in scenes with a large number of vertices. The application provides a dynamic culling scheme executable on the CPU side that ensures ultra-high culling granularity and accuracy while introducing only low additional performance and space consumption.
In this scheme, the theoretical culling benefit of the system is:
To increase the benefit of the culling system, the two sparse view angle sets Se_1 and Se_2 should be chosen, as far as possible, to have both high activity probability and high culling rate. For example, in the rendering of a two-player board game, the camera moves mostly within the areas around the two players' viewing angles; placing Se_1 and Se_2 over these two areas brings a good culling effect. In this type of scene, since the camera parameters change infrequently, the culling process can be optimized in the scene's program logic: the camera-parameter view angle set does not need to be recalculated every frame and can be set once at each view transition.
In addition, in some shadow-rendered scenes, one of the sparse view angle sets can be set to the light source's projection view angle, which greatly improves the shadow rendering stage. By assigning a proxy model for the shadow-casting pass during rendering at runtime, the shadow rendering result is still represented correctly on the picture.
Because the visible sets are determined during pre-calculation by rendering and reading back visibility, the culling covers three types at once: occlusion culling, back-face culling and view-frustum culling. The culling effect is therefore considerable, functions such as CPU view-frustum culling and pre-Z culling need not be enabled, and the runtime consumption of the system can be reduced further.
Through actual project tests, the application has been verified in the rendering of a two-player board game. The camera parameter set covers the full scene, with more than 99% of the parameters concentrated around the viewing angles of the two players; Se_1 and Se_2 were set to the active view ranges of the two players' viewing angles respectively, and the resolutions of several devices were used as the rendering resolutions for pre-calculation. The comparison shows a large improvement in both scenes; the test results are shown in table 1 below (the data include some dynamic objects that cannot be culled, etc.):
TABLE 1
For example, for a view angle a of scene 2, a preview of the scene before culling may be as shown in fig. 17. After culling, the scene previews of Sv_e1 and Sv_e2 under a free view angle may be as shown in fig. 18. The actual rendering result of the camera is identical before and after culling.
Fig. 19 is a block diagram of a picture display device according to an exemplary embodiment of the present application; the device may be used to perform all or part of the steps performed by the virtual scene display apparatus in the method shown in fig. 2 described above. As shown in fig. 19, the device includes:
a view angle acquisition module 1901 for acquiring a target view angle, which is a camera view angle at which a virtual scene is observed; the virtual scene corresponds to a high-probability view angle set, and the high-probability view angle set comprises camera view angles with access probability larger than a probability threshold value in the virtual scene;
a visibility information obtaining module 1902, configured to obtain model visibility information corresponding to the high-probability view angle set in response to the target view angle belonging to the high-probability view angle set; the model visibility information is used for indicating model parts of each scene model in the virtual scene which are not blocked under the high-probability view angle set;
A rendering module 1903 for submitting rendering data of the model portion indicated by the model visibility information to a rendering component to render a scene picture of the virtual scene by the rendering component;
and a display module 1904, configured to display a scene picture of the virtual scene.
In one possible implementation, the scene model is composed of at least two polygons; the model visibility information includes polygon visibility information of the non-occluded model portion and vertex visibility information of a polygon of the non-occluded model portion.
In one possible implementation, the polygon visible information includes an index interval of polygons in the non-occluded model portion;
the vertex visibility information includes index intervals of polygon vertices in the non-occluded model portion.
In one possible implementation, the apparatus further includes:
and the picture rendering module is used for rendering a scene picture of the virtual scene based on the scene model of the virtual scene in response to the target view angle not belonging to the high-probability view angle set.
In summary, in the scheme shown in the embodiments of the present application, indication information for the model portions that are not occluded under the high-probability view angle set is generated in advance for the virtual scene. During rendering, when the user's target view angle falls within the high-probability view angle set, only the model portions that are not occluded under that set are rendered; the occluded model portions need not be submitted for rendering and accordingly need no vertex shading. This reduces the vertex shading work in the rendering process under most conditions and improves the rendering efficiency of the virtual scene.
Fig. 20 is a block diagram of an information generation device according to an exemplary embodiment of the present application; the device may be configured to perform all or part of the steps performed by the development end device in the method shown in fig. 2. As shown in fig. 20, the device includes:
a view angle set acquiring module 2001, configured to acquire a high-probability view angle set corresponding to a virtual scene, where the high-probability view angle set comprises camera view angles whose access probability in the virtual scene is greater than a probability threshold;
an indication information acquisition module 2002 for determining, based on the high probability view angle set, visible model part indication information for indicating model parts of each scene model in the virtual scene that are not occluded under the high probability view angle set;
a visibility information generating module 2003, configured to generate model visibility information corresponding to the high probability view angle set; the model visibility information is used for indicating the virtual scene display equipment to submit rendering data of a model part indicated by the model visibility information to a rendering component when a target view belongs to the high-probability view set; the target view angle is a camera view angle at which a virtual scene is observed.
In one possible implementation, the scene model is composed of at least two polygons; the model visibility information includes polygon visibility information of the non-occluded model portion and vertex visibility information of a polygon of the non-occluded model portion.
In one possible implementation manner, the indication information obtaining module includes:
the array acquisition submodule is used for acquiring polygon visibility arrays of each scene model under the high-probability view angle set and taking the polygon visibility arrays as the visible model part indication information; the polygon visibility array is used for indicating whether polygons in each scene model are visible under the high-probability view angle set respectively.
In one possible implementation manner, the polygon visibility array includes values corresponding to polygons in the scene models respectively;
the array acquisition sub-module comprises:
a polygon acquisition unit configured to acquire a target polygon, where the target polygon is a polygon that is in a visible state at a first camera view angle among the polygons included in the target scene model; the target scene model is a scene model which is blocked under the first camera view angle in each scene model; the first camera view is any one camera view in the high probability view set;
And the numerical value setting unit is used for setting the numerical value corresponding to the target polygon in the polygon visibility array as a specified numerical value.
In one possible implementation, the apparatus further includes:
the model screening unit is used for screening a first type scene model meeting the shielding condition and a second type scene model meeting the shielded condition from the scene models before the target polygon is acquired;
and the target determining unit is used for determining a scene model which is blocked by the first type scene model under the first camera view angle from the second type scene model as the target scene model.
In one possible implementation, the polygon acquiring unit is configured to:
numbering the vertexes of each polygon in the target scene model;
assigning different color values to the vertexes of the polygons based on the numbers of the vertexes of the polygons;
performing vertex coloring rendering on the target scene model based on the first camera view angle to obtain a vertex coloring rendering image corresponding to the target scene model;
obtaining visible vertexes in vertexes of all polygons based on color values on all pixel points in the vertex coloring rendering image;
The target polygon is acquired based on visible vertices among the vertices of the respective polygons.
In one possible implementation manner, the visibility information generating module includes:
the sequencing sub-module is used for sequencing the polygons of each scene model based on the polygon visibility array; the polygons visible under the high probability view angle set are continuous in the polygons of each scene model after sequencing;
the first information acquisition sub-module is used for acquiring the polygonal visible information of the model part which is not shielded based on the polygonal index numbering result; the polygon index numbering result is a result of sequentially numbering the indexes of the polygons of each scene model after sequencing; the polygon visible information includes index intervals of polygons in the model portion that is not occluded;
the second information acquisition sub-module is used for acquiring vertex visible information of the model part which is not shielded based on the polygon vertex index numbering result; the polygon vertex index numbering result is a result of sequentially numbering indexes of vertices in the ordered polygons of each scene model; the vertex visual information comprises index intervals of polygon vertexes in the model part which is not blocked;
And the visibility information generation sub-module is used for acquiring the polygonal visibility information of the non-occluded model part and the vertex visibility information of the non-occluded model part as the model visibility information corresponding to the high-probability view angle set.
In summary, in the scheme shown in the embodiments of the present application, indication information for the model portions that are not occluded under the high-probability view angle set is generated in advance for the virtual scene. During rendering, when the user's target view angle falls within the high-probability view angle set, only the model portions that are not occluded under that set are rendered; the occluded model portions need not be submitted for rendering and accordingly need no vertex shading. This reduces the vertex shading work in the rendering process under most conditions and improves the rendering efficiency of the virtual scene.
Fig. 21 is a schematic diagram of a computer device, according to an example embodiment. The computer device may be implemented as a development end device or a virtual scene presentation device in the system shown in fig. 1.
The computer device 2100 includes a central processing unit (CPU) 2101, a system memory 2104 including a random access memory (RAM) 2102 and a read-only memory (ROM) 2103, and a system bus 2105 connecting the system memory 2104 and the central processing unit 2101. Optionally, the computer device 2100 also includes a basic input/output system 2106 to facilitate the transfer of information between the various devices within the computer, and a mass storage device 2107 for storing an operating system 2113, application programs 2114 and other program modules 2115.
The mass storage device 2107 is connected to the central processing unit 2101 through a mass storage controller (not shown) connected to the system bus 2105. The mass storage device 2107 and its associated computer-readable media provide non-volatile storage for the computer device 2100. That is, the mass storage device 2107 may include a computer-readable medium (not shown) such as a hard disk or a compact disc read-only memory (CD-ROM) drive.
The computer readable medium may include computer storage media and communication media without loss of generality. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, flash memory or other solid state memory technology, CD-ROM, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will recognize that the computer storage medium is not limited to the one described above. The system memory 2104 and mass storage 2107 described above may be referred to collectively as memory.
The computer device 2100 may connect to the internet or other network device through a network interface unit 2111 connected to the system bus 2105.
The memory further includes one or more programs, the one or more programs being stored in the memory, and the central processor 2101 implements all or part of the steps performed by the development end device in the method shown in fig. 2 by executing the one or more programs; alternatively, the central processor 2101 implements all or part of the steps performed by the virtual scene presentation apparatus in the method illustrated in fig. 2 by executing the one or more programs.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program for instructing related hardware, and the program may be stored in a computer readable storage medium, which may be a computer readable storage medium included in the memory of the above embodiments; or may be a computer-readable storage medium, alone, that is not incorporated into the terminal. The computer readable storage medium has stored therein at least one computer program that is loaded and executed by a processor to implement the methods described in the various embodiments of the present application.
Optionally, the computer-readable storage medium may include: a read-only memory (ROM), a random access memory (RAM), a solid-state drive (SSD), an optical disc, etc. The random access memory may include a resistive random access memory (ReRAM) and a dynamic random access memory (DRAM). The foregoing embodiment numbers of the present application are for description only and do not imply any ranking of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
In an exemplary embodiment, a computer program product or a computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the methods described in the above embodiments.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the presently disclosed aspects. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise constructions shown in the drawings and described above, and that various modifications and changes may be made without departing from its scope. The scope of the application is limited only by the appended claims.

Claims (15)

1. A picture presentation method, the method comprising:
acquiring a target visual angle, wherein the target visual angle is a camera visual angle for observing a virtual scene; the virtual scene corresponds to a high-probability view angle set, and the high-probability view angle set comprises camera view angles with access probability larger than a probability threshold value in the virtual scene;
responding to the target view belongs to the high-probability view set, and acquiring model visibility information corresponding to the high-probability view set; the model visibility information is used for indicating model parts of each scene model in the virtual scene which are not blocked under the high-probability view angle set;
Submitting rendering data of the model part indicated by the model visibility information to a rendering component to render a scene picture of the virtual scene through the rendering component;
and displaying the scene picture of the virtual scene.
2. The method of claim 1, wherein the scene model is comprised of at least two polygons; the model visibility information includes polygon visibility information of the non-occluded model portion and vertex visibility information of a polygon of the non-occluded model portion.
3. The method of claim 2, wherein:
the polygon visible information includes index intervals of polygons in the model portion that is not occluded;
the vertex visibility information includes index intervals of polygon vertices in the non-occluded model portion.
4. The method according to claim 1, wherein the method further comprises:
and rendering a scene picture of the virtual scene based on a scene model of the virtual scene in response to the target perspective not belonging to the set of high probability perspectives.
5. An information generation method, the method comprising:
Acquiring a high-probability view angle set corresponding to a virtual scene, wherein the high-probability view angle set comprises camera view angles with access probability larger than a probability threshold value in the virtual scene;
determining visible model part indication information based on the high-probability view angle set, wherein the visible model part indication information is used for indicating model parts of each scene model in the virtual scene, which are not shielded under the high-probability view angle set;
generating model visibility information corresponding to the high-probability view angle set; the model visibility information is used for indicating the virtual scene display equipment to submit rendering data of a model part indicated by the model visibility information to a rendering component when a target view belongs to the high-probability view set; the target view angle is a camera view angle at which a virtual scene is observed.
6. The method of claim 5, wherein the scene model is comprised of at least two polygons; the model visibility information includes polygon visibility information of the non-occluded model portion and vertex visibility information of a polygon of the non-occluded model portion.
7. The method of claim 6, wherein the determining visible model part indication information based on the high-probability view angle set comprises:
Acquiring a polygon visibility array of each scene model under the high probability view angle set as the visible model part indication information; the polygon visibility array is used for indicating whether polygons in each scene model are visible under the high-probability view angle set respectively.
8. The method of claim 7, wherein the polygon visibility array includes values corresponding to polygons in each of the scene models;
the obtaining a polygon visibility array of each scene model under the high probability view angle set comprises the following steps:
acquiring a target polygon, wherein the target polygon is a visible polygon in a first camera view angle in all polygons contained in a target scene model; the target scene model is a scene model which is blocked under the first camera view angle in each scene model; the first camera view is any one camera view in the high probability view set;
and setting the numerical value corresponding to the target polygon in the polygon visibility array as a specified numerical value.
9. The method of claim 8, wherein prior to obtaining the target polygon, the method further comprises:
Screening out a first type scene model meeting shielding conditions and a second type scene model meeting shielded conditions from the scene models;
and determining a scene model which is blocked by the first type scene model from the second type scene model under the first camera view angle as the target scene model.
10. The method of claim 8, wherein the obtaining the target polygon comprises:
numbering the vertexes of each polygon in the target scene model;
assigning different color values to the vertexes of the polygons based on the numbers of the vertexes of the polygons;
performing vertex coloring rendering on the target scene model based on the first camera view angle to obtain a vertex coloring rendering image corresponding to the target scene model;
obtaining visible vertexes in vertexes of all polygons based on color values on all pixel points in the vertex coloring rendering image;
the target polygon is acquired based on visible vertices among the vertices of the respective polygons.
11. The method of claim 7, wherein generating model visibility information corresponding to the set of high probability perspectives comprises:
Ordering polygons of each scene model based on the polygon visibility array; the polygons visible under the high probability view angle set are continuous in the polygons of each scene model after sequencing;
based on the polygon index numbering result, obtaining the polygon visible information of the model part which is not shielded; the polygon index numbering result is a result of sequentially numbering the indexes of the polygons of each scene model after sequencing; the polygon visible information includes index intervals of polygons in the model portion that is not occluded;
obtaining vertex visible information of the model part which is not shielded based on the polygon vertex index numbering result; the polygon vertex index numbering result is a result of sequentially numbering indexes of vertices in the ordered polygons of each scene model; the vertex visual information comprises index intervals of polygon vertexes in the model part which is not blocked;
and obtaining the polygon visible information of the model part which is not shielded and the vertex visible information of the model part which is not shielded as the model visible information corresponding to the high probability view angle set.
12. A picture display device, the device comprising:
the visual angle acquisition module is used for acquiring a target visual angle, wherein the target visual angle is a camera visual angle for observing a virtual scene; the virtual scene corresponds to a high-probability view angle set, and the high-probability view angle set comprises camera view angles with access probability larger than a probability threshold value in the virtual scene;
the visibility information acquisition module is used for responding to the target view belongs to the high-probability view set and acquiring model visibility information corresponding to the high-probability view set; the model visibility information is used for indicating model parts of each scene model in the virtual scene which are not blocked under the high-probability view angle set;
a rendering module for submitting rendering data of the model part indicated by the model visibility information to a rendering component to render a scene picture of the virtual scene through the rendering component;
and the display module is used for displaying the scene picture of the virtual scene.
13. An information generating apparatus, characterized in that the apparatus comprises:
the view angle set acquisition module is used for acquiring a high-probability view angle set corresponding to a virtual scene, wherein the high-probability view angle set comprises camera view angles with access probability larger than a probability threshold value in the virtual scene;
An indication information acquisition module, configured to determine, based on the high-probability view angle set, visible model part indication information, where the visible model part indication information is used to indicate model parts of each scene model in the virtual scene that are not occluded under the high-probability view angle set;
the visibility information generation module is used for generating model visibility information corresponding to the high-probability view angle set; the model visibility information is used for indicating the virtual scene display equipment to submit rendering data of a model part indicated by the model visibility information to a rendering component when a target view belongs to the high-probability view set; the target view angle is a camera view angle at which a virtual scene is observed.
14. A computer device comprising a processor and a memory, wherein the memory stores at least one computer program, the at least one computer program being loaded and executed by the processor to implement the picture presentation method of any one of claims 1 to 4 or the information generation method of any one of claims 5 to 11.
15. A computer readable storage medium, wherein at least one computer program is stored in the readable storage medium, the at least one computer program being loaded and executed by a processor to implement the picture presentation method of any one of claims 1 to 4 or the information generation method of any one of claims 5 to 11.
CN202110805394.8A 2021-07-16 2021-07-16 Picture display method, information generation method, device, equipment and storage medium Active CN113457161B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110805394.8A CN113457161B (en) 2021-07-16 2021-07-16 Picture display method, information generation method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113457161A CN113457161A (en) 2021-10-01
CN113457161B true CN113457161B (en) 2024-02-13


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114429513B (en) * 2022-01-13 2025-07-11 腾讯科技(深圳)有限公司 Visible element determination method and device, storage medium and electronic device
CN117839202A (en) * 2022-09-30 2024-04-09 腾讯科技(深圳)有限公司 Scene picture rendering method, device, equipment, storage medium and program product
CN115729838A (en) * 2022-12-01 2023-03-03 网易(杭州)网络有限公司 Test method, device, electronic equipment and storage medium for scene rendering effect

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102163343A (en) * 2011-04-11 2011-08-24 西安交通大学 Three-dimensional model optimal viewpoint automatic obtaining method based on internet image
CN102254338A (en) * 2011-06-15 2011-11-23 西安交通大学 Automatic obtaining method of three-dimensional scene optimal view angle based on maximized visual information
CN102982159A (en) * 2012-12-05 2013-03-20 上海创图网络科技发展有限公司 Three-dimensional webpage multi-scenario fast switching method
CN107230248A (en) * 2016-03-24 2017-10-03 国立民用航空学院 Viewpoint selection in virtual 3D environment
CN108154548A (en) * 2017-12-06 2018-06-12 北京像素软件科技股份有限公司 Image rendering method and device
CN108257103A (en) * 2018-01-25 2018-07-06 网易(杭州)网络有限公司 Occlusion culling method, apparatus, processor and the terminal of scene of game
CN111080762A (en) * 2019-12-26 2020-04-28 北京像素软件科技股份有限公司 Virtual model rendering method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6456285B2 (en) * 1998-05-06 2002-09-24 Microsoft Corporation Occlusion culling for complex transparent scenes in computer generated graphics
KR102137263B1 (en) * 2014-02-20 2020-08-26 삼성전자주식회사 Image processing apparatus and method


Also Published As

Publication number Publication date
CN113457161A (en) 2021-10-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40053926

Country of ref document: HK

TA01 Transfer of patent application right

Effective date of registration: 20220214

Address after: 518102 201a-k49, 2nd floor, 101, 201a, 301, 401, building 1, yujingwan garden, Xin'an Sixth Road, Xin'an street, Bao'an District, Shenzhen City, Guangdong Province

Applicant after: SHENZHEN TENCENT NETWORK INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 518057 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 floors

Applicant before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

GR01 Patent grant