
CN113947657B - Rendering method, device, equipment and storage medium of target model


Info

Publication number: CN113947657B
Authority: CN (China)
Prior art keywords: texture, target, channel, map, target model
Legal status: Active (granted)
Application number: CN202111210803.6A
Other languages: Chinese (zh)
Other versions: CN113947657A
Inventor: 杨己力
Assignee (current and original): Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd
Priority application: CN202111210803.6A
Related publications: CN113947657A (application), CN113947657B (granted patent)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/04: Texture mapping
    • G06T 15/005: General purpose rendering architectures
    • G06T 7/00: Image analysis
    • G06T 7/90: Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Generation (AREA)

Abstract

The application provides a rendering method, apparatus, device, and storage medium for a target model. The method includes the following steps: obtaining a target model and a texture jigsaw (a texture atlas), where the material of the target model is a merged material, the vertex data of the target model includes material identifiers, the texture jigsaw includes the plurality of original texture maps used by the patch grids at the identified vertices, the UV coordinates of the texture jigsaw lie within a preset UV interval, and the UV interval represents the value ranges of the U coordinate and the V coordinate respectively; determining a UV sampling interval according to the material identifier of the patch grid to be rendered and the number of original texture maps in the texture jigsaw; obtaining a target map from a plurality of pre-generated texture jigsaws of different texture levels according to the UV sampling interval and the distance and angle of the patch grid in the visual window; and performing texture sampling on the target map according to the UV sampling interval to render the patch grid and obtain a rendered target model.

Description

Rendering method, device, equipment and storage medium of target model
Technical Field
The present application relates to game technologies, and in particular, to a method, an apparatus, a device, and a storage medium for rendering a target model.
Background
An object displayed in a game picture is the rendering effect of a model of that object (hereinafter referred to as the target model). The target model and the texture maps it uses are produced in advance and rendered in the renderer of the user's terminal device to achieve the final display effect on that device.
When the terminal device renders according to the target model and its texture maps, the central processing unit (CPU) calls a graphics programming interface once to command the graphics processing unit (GPU) to render the target model once according to one texture map. Since a target model typically has many texture maps, the CPU needs to call the graphics programming interface many times. To reduce the number of these calls, one implementation splices multiple texture maps into a texture jigsaw (a texture atlas) and scales the UVs of the target model so that each area of the model to be rendered corresponds to the position of its map within the jigsaw, so that the whole texture jigsaw is rendered onto the target model in one pass.
However, rendering with the above texture jigsaw scheme offers low flexibility.
Disclosure of Invention
The present application provides a rendering method, apparatus, device, and storage medium for a target model, to solve the problem of the low flexibility of rendering schemes in the prior art.
In a first aspect, the present application provides a rendering method of a target model, including: obtaining a target model and a texture jigsaw to be attached to the target model, where the material of the target model is a merged material, the target model includes a plurality of vertices, each vertex corresponds to vertex data, the vertex data includes a material identifier, the material identifier identifies the texture map used by the patch grid of the target model at that vertex, the texture jigsaw includes a plurality of original texture maps, the UV coordinates of the texture jigsaw lie within a preset UV interval, and the UV interval represents the value ranges of the U coordinate and the V coordinate respectively; determining the UV sampling interval corresponding to the material identifier of a patch grid to be rendered according to that material identifier and the number of original texture maps in the texture jigsaw; obtaining a target map from a plurality of pre-generated texture jigsaws of different texture levels according to the UV sampling interval and the distance and angle of the patch grid in the visual window; and performing texture sampling on the target map according to the UV sampling interval to render the patch grid and obtain the rendered target model.
In a second aspect, the present application provides a rendering apparatus for a target model, including: an acquisition module, configured to obtain a target model and the texture jigsaw attached to the target model, where the material of the target model is a merged material, the target model includes a plurality of vertices, each vertex corresponds to vertex data, the vertex data includes a material identifier, the material identifier identifies the texture map used by the patch grid of the target model at that vertex, the texture jigsaw includes a plurality of original texture maps, the UV coordinates of the texture jigsaw lie within a preset UV interval, and the UV interval represents the value ranges of the U coordinate and the V coordinate respectively; a determination module, configured to determine the UV sampling interval corresponding to the material identifier of a patch grid to be rendered according to that material identifier and the number of original texture maps in the texture jigsaw; the acquisition module being further configured to obtain a target map from a plurality of pre-generated texture jigsaws of different texture levels according to the UV sampling interval and the distance and angle of the patch grid in the visual window; and a sampling module, configured to perform texture sampling on the target map according to the UV sampling interval to render the patch grid and obtain the rendered target model.
In a third aspect, the present application provides an electronic device, including: a processor, and a memory communicatively coupled to the processor; the memory stores computer-executable instructions, and the processor executes the computer-executable instructions stored in the memory to implement the method of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, perform the method of the first aspect.
With the rendering method, apparatus, device, and storage medium of a target model provided by the present application, a target model and the texture jigsaw to be attached to it are obtained, where the material of the target model is a merged material, each of its vertices corresponds to vertex data containing a material identifier that identifies the texture map used by the patch grid at that vertex, the texture jigsaw includes a plurality of original texture maps, and its UV coordinates lie within a preset UV interval representing the value ranges of the U coordinate and the V coordinate; the UV sampling interval corresponding to the material identifier of a patch grid to be rendered is determined according to that material identifier and the number of original texture maps in the texture jigsaw; a target map is obtained from a plurality of pre-generated texture jigsaws of different texture levels according to the UV sampling interval and the distance and angle of the patch grid in the visual window; and texture sampling is performed on the target map according to the UV sampling interval to render the patch grid and obtain the rendered target model. Because the material of the target model is a single merged material, the number of Draw Calls is reduced; the multiple materials of the target model can then be recovered from the material identifiers recorded in the vertex data, so the corresponding texture maps are indexed within the texture jigsaw and rendering proceeds per individual texture map. This achieves the effect that rendering can still be performed according to a single texture map, increasing rendering flexibility while reducing the number of Draw Calls.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic diagram of a rendering process in the prior art;
FIG. 2 is a flowchart of a rendering method of a target model according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a rendering process of a target model according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a texture jigsaw according to an embodiment of the present application;
FIG. 5 is an exemplary diagram of generating a MIP map according to an embodiment of the present application;
FIG. 6 is an exemplary diagram of processing a UV sampling interval according to an embodiment of the present application;
FIG. 7 is a schematic diagram of texture sampling according to an embodiment of the present application;
FIG. 8 is a schematic diagram of the multi-layer texture blending principle according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a rendering apparatus of a target model according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Specific embodiments of the present application have been shown by way of the above drawings and will be described in more detail below. The drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but rather to illustrate the inventive concepts to those skilled in the art by reference to the specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
Term interpretation:
DCC software: digital content creation software. For three-dimensional artists, commonly used DCC tools include general-purpose three-dimensional packages such as Houdini, Maya, 3ds Max, Cinema 4D, and Blender, as well as specialized tools such as ZBrush, Nuke, and Substance Painter.
Draw Call: the CPU calls a graphics programming interface, such as the glDrawElements command in OpenGL or the DrawIndexedPrimitive command in DirectX, to command the GPU to perform a rendering operation.
Shader: a shader, i.e., a program executed on the GPU during rendering.
MIP map: to speed up rendering and reduce image aliasing, the original texture map imported into the engine is typically processed into a file consisting of a series of pre-computed and optimized pictures, referred to as a MIP map or mipmap.
A game running on a terminal device includes multiple target models, such as virtual character models, building models, and lawn models. Before each target model is finally displayed on the terminal device, two stages are needed: production and rendering.
The production process of a target model includes the following steps: modeling, UV unwrapping, material assignment, texture map drawing, and rendering.
Modeling refers to creating the target model in three-dimensional software.
UV unwrapping refers to unfolding the target model into an image in the two-dimensional UV coordinate system.
UV coordinates: every image lies in a two-dimensional plane whose horizontal direction is U and whose vertical direction is V; any pixel on the image can be located through this planar two-dimensional UV coordinate system.
Material assignment refers to assigning materials to the target model.
Texture map drawing refers to drawing a texture map for each image in map-drawing software.
Rendering refers to outputting the target model or scene as an image file or video signal.
When the renderer renders the target model according to texture maps, if the target model corresponds to multiple materials, the target model is rendered according to the texture map of one material at a time, and the CPU must call a graphics programming interface once to command the GPU to render the mesh model. After the target model has been rendered according to one material's texture map, it is rendered according to the texture map of the next material, and so on until the whole target model is rendered.
As introduced above, the GPU is therefore commanded many times, which increases the number of Draw Calls and the frequency of texture switching, thereby hurting rendering efficiency.
One solution to the above problem is to pack multiple texture maps into a texture map set (atlas). The principle is that the original texture maps, which the GPU would otherwise render one by one, are packed onto one texture map, so the GPU only needs to be commanded once, reducing the number of Draw Calls.
This is accomplished by scaling and aligning the UVs of the target model to the corresponding locations of the atlas. For the specific principle, see the description of FIG. 1:
Fig. 1 is a schematic diagram of a prior art rendering process.
As shown in FIG. 1, the target model is a cube model. After UV unwrapping is performed on the cube model, a two-dimensional plan a of the cube model is obtained; the atlas is a two-dimensional image b with the same size and orientation as the plan. During rendering, the rendered cube model is obtained by attaching the two-dimensional image b to the two-dimensional plan a in one pass.
It can thus be seen that although the number of Draw Calls is reduced, flexibility in the rendering process is reduced as well; for example, rendering cannot be performed according to each map in the atlas individually, so the scheme is quite limited.
To solve the above technical problem, the present inventor proposes the following technical idea: record the material identifiers of the target model on the vertex color channels of the target model, and merge the multiple materials of the same target model into one material before importing it into the game engine. This reduces the number of Draw Calls; the previously recorded material identifiers are then used within the game engine to index texture maps in the texture jigsaw, realizing the rendering of the target model and improving rendering flexibility while reducing the number of Draw Calls.
The target model rendering method provided by the embodiments of the present application can be applied to an application scenario including a first terminal device and a second terminal device. The first terminal device creates a three-dimensional target model and the texture maps to be attached to it according to an object to be displayed in a game scene, and sends the model and its texture maps to the second terminal device; the second terminal device calls the texture maps to render the target model through its game engine, and displays the rendered target model through a graphical user interface.
In a common game scenario, the first terminal device and the second terminal device may be smartphones, tablet computers, personal computers, and the like. In a cloud gaming scenario, the second terminal device may also be a cloud server.
The following describes the technical scheme of the present application and how the technical scheme of the present application solves the above technical problems in detail with reference to specific embodiments based on the above application scenario. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a flowchart of a rendering method of a target model according to an embodiment of the present application. As shown in fig. 2, the rendering method of the target model includes the following method steps:
S201: obtain a target model and the texture jigsaw to be attached to the target model, where the material of the target model is a merged material, the target model includes a plurality of vertices, each vertex corresponds to vertex data, the vertex data includes a material identifier, the material identifier identifies the texture map used by the patch grid of the target model at that vertex, the texture jigsaw includes a plurality of original texture maps, the UV coordinates of the texture jigsaw lie within a preset UV interval, and the UV interval represents the value ranges of the U coordinate and the V coordinate respectively.
The execution body of this embodiment may be the second terminal device in the application scenario described above. The second terminal device obtains the target model and the texture jigsaw attached to the target model from the first terminal device.
Before this step, the target model needs to be created in three-dimensional software, and the texture maps attached to the target model need to be drawn in drawing software such as Photoshop. The user may create the target model and texture maps on the first terminal device and send them to the second terminal device to be rendered there.
In this embodiment, the target model is a three-dimensional model created for a target (object) in the game scene, such as a building, a virtual character, or a lawn. The target model may be composed of a plurality of patch grids, and the position where two adjacent patch grids meet is a vertex of the target model. Each vertex has vertex data, from which the second terminal device can restore the target model; the vertex data includes the vertex color information of the vertex and the material identifier associated with that vertex color information.
In this embodiment, the material identifier is recorded on the vertex color in the vertex data, thereby associating the material identifier with the vertex color; this can also be understood as establishing a correspondence between vertex colors and material identifiers.
The material identifier can be determined according to the texture pattern and material quality of the surface of the target model. For example, texture pattern A with material a corresponds to material ID 1, and texture pattern A with material b corresponds to material ID 2.
In some examples, for a triangular patch grid having three vertices, each vertex corresponds to vertex data that includes the three-dimensional coordinates of the vertex, the color information (vertex color) used by the vertex, its UV coordinates, and so on. Typically the vertex color is an RGB color, and the material identifier used by the patch grid may be recorded on the R-channel, G-channel, or B-channel color of any of the three vertices.
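As an illustration of such a vertex layout, the following minimal Python sketch quantises a material ID into the R channel of the vertex color; the field names and the quantisation scheme are assumptions for the example, not the patent's storage format.

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    position: tuple   # (x, y, z) three-dimensional coordinates
    uv: tuple         # (u, v) coordinates
    color: tuple      # (r, g, b) vertex color; one channel carries the material ID

def make_vertex(position, uv, material_id: int, max_ids: int = 16):
    """Record the material identifier on the R channel of the vertex color."""
    r = material_id / (max_ids - 1)   # quantise the ID into [0, 1]
    return Vertex(position, uv, (r, 0.0, 0.0))

v = make_vertex((0.0, 0.0, 0.0), (0.0, 0.0), material_id=2)
print(v.color)  # (0.133..., 0.0, 0.0): material ID 2 of 16, packed into R
```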
When adjacent patch grids share vertices, the material identifiers of the adjacent patch grids can be distinguished through different vertices in those grids. For example, patch grid A includes vertices a, b, and c, and patch grid B includes vertices a', b', and c', where a and a' are the shared vertices; the material identifier of patch grid A may then be recorded on vertex a, and that of patch grid B on vertex a'.
The material identifiers are produced by dividing material IDs in the model production stage. Dividing material IDs may be done by selecting the faces that use the same material in Maya and assigning them a specified material, or by using the Multi/Sub-Object material in 3ds Max.
The texture jigsaw is a jigsaw obtained by splicing multiple texture maps; it is attached to the target model to add detail information to it. The U coordinate of the texture jigsaw ranges from 0 to 1, and so does its V coordinate.
In addition, in this embodiment, the material of the target model is a merged material. The material-merging operation can be completed in the model production stage. For example, after materials are divided for the target model, the multiple materials of the target model are obtained and each material is recorded on a vertex color in the vertex data; the multiple materials of the target model are then combined into one material by merging them.
S202: determine the UV sampling interval corresponding to the material identifier of a patch grid to be rendered according to that material identifier and the number of original texture maps in the texture jigsaw.
In this embodiment, the target model can be understood as being converted into the UV coordinate system to obtain a two-dimensional image, where the two-dimensional image includes a plurality of image blocks and each image block corresponds to a material identifier.
After the three-dimensional target model is mapped into two-dimensional space to obtain the two-dimensional image, the UV coordinates of each image block in the two-dimensional image lie within the preset UV interval; that is, the U coordinate of each image block ranges from 0 to 1, and so does its V coordinate.
In some alternative embodiments, the material identifier of each image block of the two-dimensional image may be a number, and the texture maps used by the target model may be ordered to match the numbering of the image blocks, so that image blocks correspond to texture maps. A sampling interval can be calculated from the number of an image block, and the texture map corresponding to that sampling interval can be looked up in the texture jigsaw, thereby determining the texture map used by the image block.
FIG. 3 is a schematic diagram of a rendering process of a target model according to an embodiment of the present application.
As shown in FIG. 3(a), assume that after the target model is converted into the UV coordinate system, a two-dimensional image containing 16 image blocks is obtained, where the U coordinate of each image block ranges from 0 to 1 and so does its V coordinate.
As shown in FIG. 3(b), the texture jigsaw of the target model includes 8 texture maps, each corresponding to a map number (the numbers shown in the figure), which numbers the texture maps 0 to 7 in left-to-right, top-to-bottom order.
The UV coordinates of the texture jigsaw lie within the preset UV interval; that is, the U coordinate of the whole texture jigsaw ranges from 0 to 1, and so does its V coordinate.
The texture map with map number 0 is then attached to the image block whose material identifier is 0, and similarly the texture maps with the other map numbers can be attached to the other image blocks. A sampling interval is determined for the texture map with map number 0 so that this texture map can be sampled from the texture jigsaw within that interval, thereby rendering the image block.
The result of the attachment is shown in FIG. 3(c): a corresponding texture map has been attached to each image block.
FIG. 4 is a schematic diagram of a texture jigsaw according to an embodiment of the present application.
As shown in FIG. 4, assume the material identifier of the patch grid to be rendered is 02 and the number of original texture maps in the texture jigsaw is 16, with the U coordinate of the texture jigsaw ranging from 0 to 1 and the V coordinate ranging from 0 to 1. The corner UV coordinates of the UV sampling interval corresponding to material identifier 02 are then, from left to right and top to bottom, (0.5, 0), (0.75, 0), (0.5, 0.25), and (0.75, 0.25).
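For illustration, a minimal sketch of this interval computation (assuming the original texture maps form a square grid numbered left to right, top to bottom, with V measured downward as in FIG. 4; the function name is illustrative, not from the patent):

```python
import math

def uv_sampling_interval(material_id: int, num_maps: int):
    """Corners (u_min, v_min, u_max, v_max) of the sub-rectangle that
    material_id occupies in a square texture jigsaw of num_maps maps."""
    grid = math.isqrt(num_maps)       # e.g. 16 maps form a 4x4 grid
    cell = 1.0 / grid                 # UV side length of one map
    col, row = material_id % grid, material_id // grid
    return (col * cell, row * cell, (col + 1) * cell, (row + 1) * cell)

# Material identifier 02 in a 16-map jigsaw gives the corners
# (0.5, 0), (0.75, 0), (0.5, 0.25), (0.75, 0.25) listed above.
print(uv_sampling_interval(2, 16))    # (0.5, 0.0, 0.75, 0.25)
```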
S203: obtain a target map from a plurality of pre-generated texture jigsaws of different texture levels according to the UV sampling interval and the distance and angle of the patch grid in the visual window.
Before this step, a MIP map is generated from the original texture jigsaw; the MIP map includes a plurality of texture jigsaws of different sizes.
Fig. 5 is an exemplary diagram of generating a MIP map according to an embodiment of the present application.
As shown in fig. 5, assuming that the size of the original texture tile is 256×256, a series of texture tiles from large to small are generated according to the size 256×256 of the original texture tile, each texture tile has a size half that of the previous texture tile, until the size of the original texture tile is reduced to 1*1, and the series of texture tiles with different sizes are MIP maps.
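As a sketch of this chain generation, the snippet below halves a square texture down to 1×1 with a simple 2×2 box filter per level (real engines may use better filters; this is an illustration of the idea, not a specific engine's implementation):

```python
import numpy as np

def build_mip_chain(texture: np.ndarray) -> list:
    """Average each 2x2 block to halve the resolution repeatedly;
    the resulting list of levels is the MIP map."""
    levels = [texture.astype(np.float32)]
    while levels[-1].shape[0] > 1:
        t = levels[-1]
        levels.append((t[0::2, 0::2] + t[1::2, 0::2]
                       + t[0::2, 1::2] + t[1::2, 1::2]) / 4.0)
    return levels

chain = build_mip_chain(np.zeros((256, 256, 3)))
print(len(chain))  # 9 levels: 256, 128, 64, ..., 2, 1
```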
Then, during rendering, the target texture level is calculated according to the distance and angle at which the map is displayed in the visual window and the UV sampling interval, and the texture jigsaw corresponding to the target texture level is determined from the multiple texture jigsaws as the target map.
Step S203 may have at least the following three different embodiments:
In some alternative embodiments, step S203 includes:
Step a1: if the repeated tiling count of the patch grid is 1, perform the texture mapping (Multum In Parvo, MIP) calculation according to the UV sampling interval and the distance and angle of the patch grid in the visual window to obtain a first target texture level.
Here, performing the MIP calculation according to the UV sampling interval and the distance and angle of the patch grid in the visual window follows the related art and is not described in detail here.
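For reference, the textbook form of this calculation (to which the patent defers) selects the level from the screen-space UV derivatives, which shrink as the patch grid moves farther away or tilts to a grazing angle; the parameter names below are illustrative:

```python
import math

def mip_level(dudx: float, dvdx: float, dudy: float, dvdy: float,
              tex_size: int) -> float:
    """Standard MIP LOD: log2 of the largest texel footprint covered by
    one screen pixel, computed from per-pixel UV derivatives."""
    fx = math.hypot(dudx, dvdx) * tex_size   # footprint along screen x
    fy = math.hypot(dudy, dvdy) * tex_size   # footprint along screen y
    return max(0.0, math.log2(max(fx, fy)))

# A pixel covering about 4 texels of a 256-wide jigsaw selects level 2.
print(mip_level(4 / 256, 0.0, 0.0, 4 / 256, 256))  # 2.0
```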
Step a2: obtain a first target jigsaw from the plurality of pre-generated texture jigsaws of different texture levels according to the first target texture level.
Illustratively, the texture levels may be represented by numbers; for example, number 0 represents the original texture jigsaw, number 1 represents the texture jigsaw reduced by half from the original, and so on, giving the texture level of each texture jigsaw.
If the first target texture level is 01, the target jigsaw corresponding to texture level 01 is obtained as the first target jigsaw.
In other alternative embodiments, step S203 includes:
Step b1: if the repeated tiling count of the patch grid is greater than 1, process the UV sampling interval to obtain a processed UV sampling interval, where the U coordinates in the processed UV sampling interval are continuous coordinates, and so are the V coordinates.
Specifically, the UV sampling interval may be processed such that the U-coordinate interval spans from 0 to 1 × the repeated tiling count, and likewise for the V-coordinate interval. Step b1 is described in detail below with reference to the accompanying drawings:
FIG. 6 is an exemplary diagram of processing a UV sampling interval according to an embodiment of the present application.
As shown in FIG. 6, assume the UV coordinates of the UV sampling interval are (0.5, 0), (0.75, 0), (0.5, 0.25), and (0.75, 0.25) from left to right and top to bottom, and the repeated tiling count of the patch grid is 2 (tiled twice along the U axis and twice along the V axis). The UV coordinates of every repeated sampling interval are then the identical (0.5, 0), (0.75, 0), (0.5, 0.25), and (0.75, 0.25), so a seam appears at the texture junctions during the MIP calculation. The present application therefore processes the UV sampling interval of the patch grid, according to the repeated tiling of the texture, into (0, 0), (2, 0), (0, 2), and (2, 2) from left to right and top to bottom, and performs the MIP calculation in the shader according to (0, 0), (2, 0), (0, 2), and (2, 2) to obtain the second target texture level (a code sketch follows step b3 below).
Step b2: perform the MIP calculation according to the processed UV sampling interval and the distance and angle of the patch grid in the visual window to obtain a second target texture level.
For the specific implementation of step b2, refer to the description of step a1.
Step b3: obtain a second target map from the plurality of pre-generated texture jigsaws of different texture levels according to the second target texture level, where the texture level of the second target map is the second target texture level.
For the specific implementation of step b3, refer to the description of step a2.
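The sketch below illustrates the interval processing of steps b1 to b3: the continuous coordinate (the patch UV scaled by the tiling count) is what the MIP calculation should see, so its derivatives do not jump at tile seams, while the wrapped coordinate addresses the tile's sub-rectangle for the actual fetch. The function name and the split between the two coordinates are assumptions for illustration, not the patent's shader code.

```python
def tiled_sample_uvs(patch_uv, interval, tiling: int):
    """Return (mip_uv, fetch_uv) for a patch grid tiled `tiling` times.
    mip_uv is continuous over [0, tiling] and feeds the MIP calculation;
    fetch_uv wraps into the jigsaw sub-rectangle given by
    interval = (u_min, v_min, u_max, v_max)."""
    u_min, v_min, u_max, v_max = interval
    mip_uv = (patch_uv[0] * tiling, patch_uv[1] * tiling)
    fu, fv = mip_uv[0] % 1.0, mip_uv[1] % 1.0      # repeat within one tile
    fetch_uv = (u_min + fu * (u_max - u_min),
                v_min + fv * (v_max - v_min))
    return mip_uv, fetch_uv

# Tiling count 2: the continuous corners run (0, 0)..(2, 2), as in FIG. 6.
print(tiled_sample_uvs((1.0, 1.0), (0.5, 0.0, 0.75, 0.25), 2))
```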
In yet other alternative embodiments, step S203 includes:
Step c1: process the UV sampling interval to obtain a processed UV sampling interval.
For the specific implementation of step c1, refer to the description of step b1.
Step c2: perform the MIP calculation according to the processed UV sampling interval and the distance and angle of the patch grid in the visual window to obtain a third target texture level.
For the specific implementation of step c2, refer to the description of step a1.
Step c3: obtain a third target map from the plurality of pre-generated texture jigsaws of different texture levels according to the third target texture level, where the texture level of the third target map is the third target texture level.
For the specific implementation of step c3, refer to the description of step a2.
As can be seen from the three embodiments of step S203: in the first two embodiments, whether the MIP calculation uses the coordinates of the UV sampling interval before or after processing is decided according to the repeated tiling count of the texture, whereas in the third embodiment the MIP calculation is always performed according to the coordinates of the processed UV sampling interval, regardless of the repeated tiling count, i.e., regardless of whether the texture needs to be repeatedly tiled.
S204: perform texture sampling on the target map according to the UV sampling interval to render the patch grid and obtain the rendered target model.
Specifically, texture sampling is performed according to the UV coordinates of the patch grid and the size of the target map.
FIG. 7 is a schematic diagram of texture sampling according to an embodiment of the present application. As shown in FIG. 7, assume the UV coordinates of the UV sampling interval are (0.5, 0), (0.75, 0), (0.5, 0.25), and (0.75, 0.25) from left to right and top to bottom; the texture region with those UV coordinates is then sampled from the target map, and the patch grid is rendered to obtain the rendered target model.
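A minimal stand-in for this sampling step, assuming nearest-neighbour lookup inside the UV sampling interval (a real GPU sampler would filter bilinearly or trilinearly):

```python
import numpy as np

def sample_patch(target_map: np.ndarray, interval, patch_uv):
    """Fetch the texel addressed by patch_uv (0..1 across the patch
    grid), remapped into the UV sampling interval of the target map."""
    u_min, v_min, u_max, v_max = interval
    h, w = target_map.shape[:2]
    u = u_min + patch_uv[0] * (u_max - u_min)
    v = v_min + patch_uv[1] * (v_max - v_min)
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)   # V measured top-down, as in FIG. 4
    return target_map[y, x]

jigsaw = np.random.rand(256, 256, 3)   # stand-in target map
print(sample_patch(jigsaw, (0.5, 0.0, 0.75, 0.25), (0.5, 0.5)))
```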
According to this embodiment, a target model and the texture jigsaw to be attached to it are obtained, where each vertex of the target model corresponds to vertex data containing a material identifier that identifies the texture map used by the patch grid at that vertex, the texture jigsaw includes a plurality of original texture maps, and its UV coordinates lie within a preset UV interval representing the value ranges of the U coordinate and the V coordinate; the UV sampling interval corresponding to the material identifier of a patch grid to be rendered is determined according to that material identifier and the number of original texture maps in the texture jigsaw; a target map is obtained from a plurality of pre-generated texture jigsaws of different texture levels according to the UV sampling interval and the distance and angle of the patch grid in the visual window; and texture sampling is performed on the target map according to the UV sampling interval to render the patch grid and obtain the rendered target model. Because the material of the target model is a single merged material, the number of Draw Calls is reduced; the multiple materials of the target model can then be recovered from the material identifiers recorded in the vertex data, so the corresponding texture maps are indexed within the texture jigsaw and rendering proceeds per individual texture map. This achieves the effect that rendering can still be performed according to a single texture map, increasing rendering flexibility while reducing the number of Draw Calls.
The above embodiments describe the scene in which the texture of the patch grid is a single-layer texture. In practical applications there are also scenes in which the texture of the patch grid is a blend of multiple texture layers. For such scenes, each patch grid can be understood as including multiple layers of images to be rendered, where each layer corresponds to a texture map, the vertex data of an image block includes the identifier of each layer, and the texture map of each layer corresponds to a material identifier. The method of this embodiment then further includes the following step: for each of the multiple layers of images to be rendered, determine the texture map corresponding to that layer in the texture jigsaw according to the material identifier corresponding to the color channel of that layer.
In a multi-layer texture blending scene, the material identifiers of the different layers in the patch grid to be rendered can be recorded in the following two ways.
In an optional implementation, if the texture map corresponding to the patch grid is a multi-layer texture map, the vertex data of the target model includes an identifier associated with a first color channel and an identifier associated with a second color channel, where the first color channel is any one of the R channel, G channel, or B channel of the vertex color or the R channel, G channel, or B channel of a texture map, the second color channel is likewise any one of those color channels, and the first color channel and the second color channel are different color channels. The identifier associated with the first color channel identifies the texture map used by the bottom-layer image; the identifier associated with the second color channel identifies the texture map used by a non-bottom-layer image.
For example, a first material identifier may be recorded on the R channel of the vertex color to identify the texture map used by the bottom-layer image, and a second material identifier may be recorded on the B channel of the vertex color to identify the texture map used by a non-bottom-layer image.
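One possible encoding of this example is sketched below, quantising each identifier into its colour channel; the packing scheme and the 16-ID capacity are assumptions for illustration:

```python
def encode_layer_ids(bottom_id: int, top_id: int, max_ids: int = 16):
    """Pack the bottom-layer material ID into R and the non-bottom-layer
    ID into B of a vertex color, each quantised to [0, 1]."""
    q = lambda i: i / (max_ids - 1)
    return (q(bottom_id), 0.0, q(top_id))   # (R, G, B)

def decode_layer_ids(color, max_ids: int = 16):
    """Recover the two material IDs from the vertex color."""
    r, _, b = color
    return round(r * (max_ids - 1)), round(b * (max_ids - 1))

print(decode_layer_ids(encode_layer_ids(2, 7)))  # (2, 7)
```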
In another optional implementation, if the texture map corresponding to the patch grid is a multi-layer texture map, the vertex data of the target model includes an identifier associated with a first color channel and a remapping identifier associated with the first color channel, where the first color channel is any one of the R channel, G channel, or B channel of the vertex color or the R channel, G channel, or B channel of a texture map. The identifier associated with the first color channel identifies the texture map used by the bottom-layer image area; the remapping identifier represents the texture map used by a non-bottom-layer image area. The remapping algorithm follows the related art and is not described here.
Each layer of the images to be rendered may be rendered using the embodiment shown in FIG. 2; for details, refer to the description of that embodiment, which is not repeated here.
FIG. 8 is a schematic diagram of the multi-layer texture blending principle according to an embodiment of the present application.
As shown in FIG. 8, the texture map corresponding to Id0 is first looked up in the texture jigsaw according to material identifier Id0 and the image to be rendered of the first layer is rendered; next, the texture map corresponding to Id1 is looked up according to material identifier Id1 and the image of the second layer is rendered; then the texture map corresponding to Id2 is looked up according to material identifier Id2 and the image of the third layer is rendered. The first, second, and third layers are arranged upward from the bottom layer.
This embodiment targets materials blended from multiple layers of detail textures, such as terrain materials, where vertex colors painted in the engine control which texture appears in which area. Because this embodiment uses vertex colors as index IDs, a variety of different texture maps can be used for each texture layer. For example, assume the first texture layer of an image is one of the 16 texture maps of the texture jigsaw, so the first layer has 16 possible choices; the second texture layer then has 15 possible choices (excluding the texture map already selected by the first layer), and so on; assuming the image has 3 texture layers in total, the image may ultimately have 45 texture blending combinations. Rich surface texture expression is thus obtained with fewer samples, and each detail texture can also be tiled, increasing the fineness of the surface and producing more realistic surface rendering.
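A sketch of the blending idea follows: the bottom layer is sampled first and each higher layer is mixed in by a painted weight. Here sample_fn stands in for the per-ID jigsaw lookup of steps S202 to S204, and the linear mix is an assumption for illustration, not the patent's exact shader:

```python
def blend_layers(layer_ids, weights, sample_fn):
    """Blend texture layers indexed by material IDs (Id0 is the bottom
    layer, as in FIG. 8); weights[i] controls how strongly layer i+1
    covers what lies beneath it."""
    color = sample_fn(layer_ids[0])                  # bottom layer first
    for mat_id, w in zip(layer_ids[1:], weights):
        color = (1.0 - w) * color + w * sample_fn(mat_id)
    return color

# Three layers Id0..Id2 with constant stand-in colors, mixed 50% and 25%.
palette = {0: 0.0, 1: 1.0, 2: 0.5}
print(blend_layers([0, 1, 2], [0.5, 0.25], palette.get))  # 0.5
```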
Further, in some embodiments, each material identifier may also correspond to material parameters, such as the repeated tiling count, UV translation, and rotation parameters, so that in the rendering stage these parameters can be applied to a single texture map, further increasing the flexibility of rendering.
In the related art, target models composed of many components are common; each component has a different texture or material type and needs to be split into multiple materials. Under the conventional workflow, the art producer must evaluate whether to merge some materials to reduce their number, because too many materials increase the number of Draw Calls and hurt rendering efficiency. With this embodiment, the art producer does not need to consider this. For example, a building model containing many structures is composed of multiple materials, and the art producer only needs to divide the building model into multiple material IDs according to the material quality and texture pattern of each part. Assuming that 13 materials are used for the building model, this embodiment can merge them into 1 material in the model production stage, so that the material of the target model imported into the game engine is one material; that is, the target model obtained in step S201 carries one material, which reduces the number of Draw Calls. The UV unwrapping and texture-drawing parts are then produced in the conventional way: for example, a tiling (repeated tiling) effect can be created by laying UVs outside the 0-1 frame, UV translation, scaling, and the like can be controlled through the material parameters, and the art producer can draw texture maps following the normal workflow during target-model production without pre-compositing an atlas. The final effect can also be previewed in real time in the DCC software while adjusting UVs, debugging materials, and drawing maps.
In the model-asset production stage, the method of this embodiment does not affect the existing production workflow, ensuring that model assets can be produced quickly and smoothly in batches. It also increases the freedom of the art producer, who may, for example, use a large number of material balls or a large number of detail textures on the same target model. The art producer may also use material parameters in the DCC software to control texture tiling, translation, rotation, and so on. In this embodiment, each material identifier may correspond to material parameters, so the engine can distinguish, from the recorded material IDs, the UV range controlled by each material ID and the material parameters used within that range; for example, the Tiling value and the UV translation and rotation parameters corresponding to each material ID are obtained by array indexing.
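The array indexing mentioned above might look like the following; the field names and values are purely illustrative assumptions:

```python
# Hypothetical per-material parameter table, indexed by material ID.
MATERIAL_PARAMS = [
    {"tiling": 1, "uv_offset": (0.0, 0.0), "uv_rotation_deg": 0.0},   # ID 0
    {"tiling": 4, "uv_offset": (0.5, 0.0), "uv_rotation_deg": 90.0},  # ID 1
]

def params_for(material_id: int) -> dict:
    """Fetch the tiling/translation/rotation set of one material ID."""
    return MATERIAL_PARAMS[material_id]

print(params_for(1)["tiling"])  # 4
```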
After model-asset production is completed, a large number of materials can be merged, reducing the number of Draw Calls in the final rendering. Packing the texture jigsaw reduces the number of textures, and therefore the frequency of texture switching, in the final rendering.
In the engine-effect implementation stage, material indexing is realized through the shader algorithm, restoring the material and texture precision seen in the DCC software. For materials blended from multiple texture layers, the number of texture samples can be reduced, obtaining varied texture combinations from a small number of samples.
Based on the above embodiments of the rendering method of the target model, FIG. 9 is a schematic structural diagram of a rendering apparatus of a target model according to an embodiment of the present application. As shown in FIG. 9, the rendering apparatus of the target model includes: an acquisition module 91, a determination module 92, and a sampling module 93.
The acquisition module 91 is configured to obtain a target model and the texture jigsaw attached to the target model, where the material of the target model is a merged material, the target model includes a plurality of vertices, each vertex corresponds to vertex data, the vertex data includes a material identifier, the material identifier identifies the texture map used by the patch grid of the target model at that vertex, the texture jigsaw includes a plurality of original texture maps, the UV coordinates of the texture jigsaw lie within a preset UV interval, and the UV interval represents the value ranges of the U coordinate and the V coordinate respectively.
The determination module 92 is configured to determine the UV sampling interval corresponding to the material identifier of a patch grid to be rendered according to that material identifier and the number of original texture maps in the texture jigsaw.
The acquisition module 91 is further configured to obtain a target map from a plurality of pre-generated texture jigsaws of different texture levels according to the UV sampling interval and the distance and angle of the patch grid in the visual window.
The sampling module 93 is configured to perform texture sampling on the target map according to the UV sampling interval to render the patch grid and obtain the rendered target model.
In some embodiments, when obtaining the target map from the plurality of pre-generated texture jigsaws of different texture levels according to the UV sampling interval and the distance and angle of the patch grid in the visual window, the acquisition module 91 is specifically configured to: if the repeated tiling count of the patch grid is 1, perform the texture mapping MIP calculation according to the UV sampling interval and the distance and angle of the patch grid in the visual window to obtain a first target texture level; and obtain a first target map from the plurality of pre-generated texture jigsaws of different texture levels according to the first target texture level, where the texture level of the first target map is the first target texture level.
In some embodiments, when obtaining the target map from the plurality of pre-generated texture jigsaws of different texture levels according to the UV sampling interval and the distance and angle of the patch grid in the visual window, the acquisition module 91 is specifically configured to: if the repeated tiling count of the patch grid is greater than 1, process the UV sampling interval to obtain a processed UV sampling interval; perform the texture mapping MIP calculation according to the processed UV sampling interval and the distance and angle of the patch grid in the visual window to obtain a second target texture level; and obtain a second target map from the plurality of pre-generated texture jigsaws of different texture levels according to the second target texture level, where the texture level of the second target map is the second target texture level.
In some embodiments, when obtaining the target map from the plurality of pre-generated texture jigsaws of different texture levels according to the UV sampling interval and the distance and angle of the patch grid in the visual window, the acquisition module 91 is specifically configured to: process the UV sampling interval to obtain a processed UV sampling interval; perform the texture mapping MIP calculation according to the processed UV sampling interval and the distance and angle of the patch grid in the visual window to obtain a third target texture level; and obtain a third target map from the plurality of pre-generated texture jigsaws of different texture levels according to the third target texture level, where the texture level of the third target map is the third target texture level.
In some embodiments, the range of values of the U coordinates is between 0 and 1, and the range of values of the V coordinates is between 0 and 1.
In some embodiments, if the texture map corresponding to the patch grid is a multi-layer texture map, the vertex data of the target model includes an identifier associated with a first color channel and an identifier associated with a second color channel, where the first color channel is any one of the R channel, G channel, or B channel of the vertex color or the R channel, G channel, or B channel of a texture map, the second color channel is likewise any one of those color channels, and the first color channel and the second color channel are different color channels. The identifier associated with the first color channel identifies the texture map used by the bottom-layer image; the identifier associated with the second color channel identifies the texture map used by a non-bottom-layer image.
In some embodiments, if the texture map corresponding to the patch grid is a multi-layer texture map, the vertex data of the target model includes an identifier associated with a first color channel and a remapping identifier associated with the first color channel, where the first color channel is any one of the R channel, G channel, or B channel of the vertex color or the R channel, G channel, or B channel of a texture map. The identifier associated with the first color channel identifies the texture map used by the bottom-layer image area; the remapping identifier represents the texture map used by a non-bottom-layer image area.
In some embodiments, the material identifier corresponds to a texture pattern and a material quality.
The rendering apparatus of the target model provided by this embodiment of the present application can execute the technical solution of the rendering method of the target model in the above embodiments; its implementation principle and technical effects are similar and are not repeated here.
It should be noted that the division of the modules of the above apparatus is merely a division of logical functions; in practice, the modules may be fully or partially integrated into one physical entity or physically separated. These modules may all be implemented in the form of software called by a processing element, all in hardware, or partly in software called by a processing element and partly in hardware. For example, the determination module 92 may be a separately established processing element, may be integrated in a chip of the above apparatus, or may be stored in the memory of the above apparatus in the form of program code and called by a processing element of the apparatus to execute its functions. The implementation of the other modules is similar. In addition, all or some of these modules may be integrated together or implemented independently. The processing element here may be an integrated circuit with signal processing capability. During implementation, each step of the above method, or each module above, may be completed by an integrated logic circuit of hardware in the processor element or by instructions in the form of software.
FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in FIG. 10, the electronic device may include: a transceiver 101, a processor 102, and a memory 103.
The processor 102 executes the computer-executable instructions stored in the memory, causing the processor 102 to perform the solutions of the embodiments described above. The processor 102 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The memory 103 is connected to the processor 102 through a system bus over which they communicate with each other, and the memory 103 is adapted to store computer program instructions.
The transceiver 101 may be used to obtain a target model and texture tiles attached to the target model.
The system bus may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and so on; for ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or one type of bus. The transceiver is used to enable communication between the database access device and other computers (e.g., clients, read-write libraries, and read-only libraries). The memory may include random access memory (RAM) and may also include non-volatile memory.
The electronic device provided by the embodiment of the present application may be the second terminal device of the foregoing embodiment.
An embodiment of the present application also provides a chip for running instructions, the chip being configured to execute the technical solution of the rendering method of the target model in the above embodiments.
An embodiment of the present application also provides a computer-readable storage medium storing computer instructions that, when run on a computer, cause the computer to execute the technical solution of the rendering method of the target model in the above embodiments.
An embodiment of the present application also provides a computer program product including a computer program stored in a computer-readable storage medium. At least one processor can read the computer program from the computer-readable storage medium, and when the at least one processor executes the computer program, the technical solution of the rendering method of the target model in the above embodiments is realized.
The rendering method of the target model in the embodiment of the application can be operated on local terminal equipment or a cloud interaction system.
The cloud interaction system comprises a cloud server and user equipment and is used for running cloud applications. Cloud applications run separately.
In an alternative embodiment, cloud gaming refers to a gaming mode based on cloud computing. In the running mode of a cloud game, the body that runs the game program is separated from the body that presents the game pictures: the storage and running of the object control method are completed on the cloud game server, and the function of the cloud game client is to receive and send data and to present the game pictures. For example, the cloud game client may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a handheld computer, while the cloud game server that performs the game data processing is in the cloud. When playing a game, the player operates the cloud game client to send operation instructions to the cloud game server; the cloud game server runs the game according to the operation instructions, encodes and compresses data such as the game pictures, and returns the data to the cloud game client over the network; finally, the cloud game client decodes the data and outputs the game pictures.
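As a minimal illustration of that round trip, the following C++ sketch models a single frame. Every type and function here (Input, EncodedFrame, serverStep, decode, present) is an invented stub for illustration only, not part of the claimed method, and the network transport between client and server is elided.

```cpp
#include <vector>

struct Input        { int keyCode = 0; };                    // player operation instruction
struct EncodedFrame { std::vector<unsigned char> payload; }; // compressed game picture
struct Frame        { std::vector<unsigned char> pixels; };  // decoded game picture

// Assumed cloud-server entry point: run the game one tick according to the
// operation instruction, then encode and compress the resulting picture (stubbed).
EncodedFrame serverStep(const Input&) { return {}; }

// Assumed client-side codec and display hooks (stubbed).
Frame decode(const EncodedFrame& ef) { return { ef.payload }; }
void  present(const Frame&)          {}

// One frame of the loop: the client sends the operation instruction, the server
// returns the compressed picture, and the client decodes and outputs it.
void clientFrame(const Input& playerInput) {
    EncodedFrame ef = serverStep(playerInput); // in practice, sent over the network
    present(decode(ef));
}
```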
In an alternative embodiment, the local terminal device stores the game program and is used to present the game pictures. The local terminal device interacts with the player through a graphical user interface; that is, the game program is conventionally downloaded, installed, and run on the electronic device. The local terminal device may provide the graphical user interface to the player in a variety of ways; for example, the interface may be rendered and displayed on a display screen of the terminal, or provided to the player by holographic projection. For example, the local terminal device may include a display screen for presenting the graphical user interface, which includes the game pictures, and a processor for running the game, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (11)

1. A method of rendering a target model, comprising:
Obtaining a target model and a texture jigsaw to be attached to the target model, wherein the material of the target model is a combined material, the target model comprises a plurality of vertices, each vertex corresponds to vertex data, the vertex data comprises a material identifier, the material identifier identifies the texture map used by the patch grid of the target model at the vertex, the texture jigsaw comprises a plurality of original texture maps, the UV coordinates of the texture jigsaw lie within a preset UV interval, and the UV interval respectively represents the value ranges of the U coordinate and the V coordinate;
Determining a plurality of image blocks in a two-dimensional image of the target model, calculating a UV sampling interval according to the serial number of each image block, and searching the texture jigsaw, according to the UV sampling interval, for the texture map corresponding to the sampling interval, so as to determine the texture map used by each image block; acquiring a target map from a plurality of pre-generated texture jigsaws of different texture levels according to the UV sampling interval and the distance and angle of the patch grid in a visual window; and
performing texture sampling on the target map according to the UV sampling interval, so as to render the patch grid and obtain a rendered target model.
2. The method of claim 1, wherein the acquiring the target map from the plurality of pre-generated texture jigsaws of different texture levels according to the UV sampling interval and the distance and angle of the patch grid in the visual window comprises:
if the repeated tiling count of the patch grid is 1, performing texture map MIP calculation according to the UV sampling interval and the distance and angle of the patch grid in the visual window, to obtain a first target texture level; and
acquiring a first target map from the plurality of pre-generated texture jigsaws of different texture levels according to the first target texture level, wherein the texture level of the first target map is the first target texture level.
3. The method of claim 1, wherein the acquiring the target map from the plurality of pre-generated texture jigsaws of different texture levels according to the UV sampling interval and the distance and angle of the patch grid in the visual window comprises:
if the repeated tiling count of the patch grid is greater than 1, processing the UV sampling interval to obtain a processed UV sampling interval;
performing texture map MIP calculation according to the processed UV sampling interval and the distance and angle of the patch grid in the visual window, to obtain a second target texture level; and
acquiring a second target map from the plurality of pre-generated texture jigsaws of different texture levels according to the second target texture level, wherein the texture level of the second target map is the second target texture level.
4. The method of claim 1, wherein the acquiring the target map from the plurality of pre-generated texture jigsaws of different texture levels according to the UV sampling interval and the distance and angle of the patch grid in the visual window comprises:
processing the UV sampling interval to obtain a processed UV sampling interval;
performing texture map MIP calculation according to the processed UV sampling interval and the distance and angle of the patch grid in the visual window, to obtain a third target texture level; and
acquiring a third target map from the plurality of pre-generated texture jigsaws of different texture levels according to the third target texture level, wherein the texture level of the third target map is the third target texture level.
5. The method of any one of claims 1-4, wherein the value range of the U coordinate is between 0 and 1, and the value range of the V coordinate is between 0 and 1.
6. The method of claim 5, wherein, if the texture map corresponding to the patch grid is a multi-layer texture map, the vertex data of the target model comprises an identifier associated with a first color channel and an identifier associated with a second color channel, the first color channel is any one of the R channel, G channel, and B channel of the vertex color and the R channel, G channel, and B channel of the texture map, the second color channel is any one of the R channel, G channel, and B channel of the vertex color and the R channel, G channel, and B channel of the texture map, and the first color channel and the second color channel are different color channels;
the identifier associated with the first color channel is used to identify the texture jigsaw used by the bottom-layer image; and
the identifier associated with the second color channel is used to identify the texture jigsaw used by the non-bottom-layer image.
7. The method of claim 5, wherein, if the texture map corresponding to the patch grid is a multi-layer texture map, the vertex data of the target model comprises an identifier associated with a first color channel and a remap identifier associated with the first color channel, the first color channel being any one of the R channel, G channel, and B channel of the vertex color and the R channel, G channel, and B channel of the texture map;
the identifier associated with the first color channel is used to identify the texture jigsaw used by the bottom-layer image area; and
the remap identifier is used to represent the texture jigsaw used by the non-bottom-layer image area.
8. The method of any one of claims 1-4, wherein the material identifier comprises a texture pattern and a texture.
9. A rendering apparatus of a target model, comprising:
an acquisition module, configured to acquire a target model and a texture jigsaw to be attached to the target model, wherein the material of the target model is a combined material, the target model comprises a plurality of vertices, each vertex corresponds to vertex data, the vertex data comprises a material identifier, the material identifier identifies the texture map used by the patch grid of the target model at the vertex, the texture jigsaw comprises a plurality of original texture maps, the UV coordinates of the texture jigsaw lie within a preset UV interval, and the UV interval respectively represents the value ranges of the U coordinate and the V coordinate;
a determining module, configured to determine a plurality of image blocks in a two-dimensional image of the target model, calculate a UV sampling interval according to the serial number of each image block, and search the texture jigsaw, according to the UV sampling interval, for the texture map corresponding to the sampling interval, so as to determine the texture map used by each image block;
the acquisition module being further configured to acquire a target map from a plurality of pre-generated texture jigsaws of different texture levels according to the UV sampling interval and the distance and angle of the patch grid in the visual window; and
a sampling module, configured to perform texture sampling on the target map according to the UV sampling interval, so as to render the patch grid and obtain a rendered target model.
10. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
The memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the method of any one of claims 1-8.
11. A computer-readable storage medium having computer-executable instructions stored therein, wherein the computer-executable instructions, when executed by a processor, implement the method of any one of claims 1-8.
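To make claim 1's core operation concrete, the following C++ sketch derives a UV sampling interval inside a texture jigsaw and remaps a patch-grid UV into it. The row-major square-grid packing and all names here are assumptions for illustration; the claim does not fix a particular atlas layout.

```cpp
#include <cmath>

struct UVInterval {
    float uMin, vMin;  // lower-left corner of the tile's sub-rectangle
    float uMax, vMax;  // upper-right corner, within the preset [0,1] UV interval
};

// Assume the jigsaw packs `tileCount` original texture maps into a square,
// row-major grid; `materialId` is the identifier carried in the vertex data.
UVInterval samplingInterval(int materialId, int tileCount) {
    int side  = static_cast<int>(std::ceil(std::sqrt(static_cast<float>(tileCount))));
    float ext = 1.0f / static_cast<float>(side);  // each tile's UV extent
    int col = materialId % side;
    int row = materialId / side;
    return { col * ext, row * ext, (col + 1) * ext, (row + 1) * ext };
}

// Remap a patch-grid UV in [0,1] into the tile's sub-interval, so that one
// sampler can address the combined material through a single texture jigsaw.
void remapUV(const UVInterval& iv, float u, float v, float& outU, float& outV) {
    outU = iv.uMin + u * (iv.uMax - iv.uMin);
    outV = iv.vMin + v * (iv.vMax - iv.vMin);
}
```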
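Claims 2-4 select a texture level (MIP) from the distance and angle of the patch grid in the visual window, first "processing" the UV sampling interval when the map is tiled more than once. A hedged sketch of the conventional derivative-based computation follows; the derivative inputs and the choice of formula are assumptions, since the claims do not prescribe them.

```cpp
#include <algorithm>
#include <cmath>

// Conventional MIP-level estimate from screen-space UV derivatives, which
// encode how distance and viewing angle shrink the sampled footprint.
float mipLevel(float dudx, float dvdx, float dudy, float dvdy,
               float texWidth, float texHeight) {
    float dx  = std::hypot(dudx * texWidth, dvdx * texHeight);
    float dy  = std::hypot(dudy * texWidth, dvdy * texHeight);
    float rho = std::max(dx, dy);            // largest per-pixel footprint
    return std::max(0.0f, std::log2(rho));   // clamp to the base level
}

// One plausible reading of the "processing" in claim 3: wrap a repeatedly
// tiled coordinate back into a single period before it is remapped into the
// tile's sub-interval of the jigsaw.
float wrapTiledCoord(float u, float tilingCount) {
    if (tilingCount > 1.0f)
        u -= std::floor(u);   // keep only the fractional part
    return u;
}
```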
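Claims 6 and 7 reserve color channels of the vertex color (or of a texture map) to mark which tiles the bottom-layer and non-bottom-layer images use. One common way such a channel is consumed at shading time is as a blend weight between the two sampled layers; the linear blend below is an illustrative assumption, not the claimed method.

```cpp
struct RGBA { float r, g, b, a; };

// Blend the texel sampled for the bottom-layer tile with the texel sampled
// for the non-bottom-layer tile, weighted by one color channel (for example,
// the vertex color's G channel). weight == 0 keeps the bottom layer intact.
RGBA blendLayers(const RGBA& bottom, const RGBA& upper, float weight) {
    return {
        bottom.r + (upper.r - bottom.r) * weight,
        bottom.g + (upper.g - bottom.g) * weight,
        bottom.b + (upper.b - bottom.b) * weight,
        bottom.a + (upper.a - bottom.a) * weight,
    };
}
```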
CN202111210803.6A 2021-10-18 2021-10-18 Rendering method, device, equipment and storage medium of target model Active CN113947657B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111210803.6A CN113947657B (en) 2021-10-18 2021-10-18 Rendering method, device, equipment and storage medium of target model

Publications (2)

Publication Number Publication Date
CN113947657A CN113947657A (en) 2022-01-18
CN113947657B (en) 2024-07-23

Family

ID=79331309

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111210803.6A Active CN113947657B (en) 2021-10-18 2021-10-18 Rendering method, device, equipment and storage medium of target model

Country Status (1)

Country Link
CN (1) CN113947657B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494562A (en) * 2022-01-20 2022-05-13 北京中航双兴科技有限公司 Data processing method and device for terrain rendering
CN114677471A (en) * 2022-01-27 2022-06-28 浙江慧脑信息科技有限公司 A method of controlling the texture array of 3D model
CN114429523B (en) * 2022-02-10 2024-05-14 浙江慧脑信息科技有限公司 Method for controlling partition mapping of three-dimensional model
CN114782606B (en) * 2022-05-12 2025-06-24 网易(杭州)网络有限公司 Texture mapping expansion method and device for voxel model, electronic equipment, and medium
CN115880468A (en) * 2022-11-30 2023-03-31 北京蔚领时代科技有限公司 A model UV method and device for the "Sweep" node in Houdini
CN115937026A (en) * 2022-12-02 2023-04-07 网易(杭州)网络有限公司 Texture modifying method, device, electronic device, and computer-readable storage medium
CN116561081B (en) * 2023-07-07 2023-12-12 腾讯科技(深圳)有限公司 Data processing method, device, electronic equipment, storage medium and program product
CN116883575B (en) * 2023-09-08 2023-12-26 腾讯科技(深圳)有限公司 Building group rendering method, device, computer equipment and storage medium
CN117011492B (en) * 2023-09-18 2024-01-05 腾讯科技(深圳)有限公司 Image rendering method and device, electronic equipment and storage medium
CN117541698B (en) * 2023-11-13 2024-08-20 深圳市塞外科技有限公司 Method, device, terminal and medium for adaptively rendering sector diagram to 3D model
CN117745974B (en) * 2024-02-19 2024-05-10 潍坊幻视软件科技有限公司 Method for dynamically generating rounded rectangular grid
CN118096981B (en) * 2024-04-22 2024-08-06 山东捷瑞数字科技股份有限公司 Mapping processing method, system and equipment based on dynamic change of model
CN119271640B (en) * 2024-12-10 2025-03-28 浙江中控信息产业股份有限公司 Model data processing method and model data processing device

Family Cites Families (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102651126B1 (en) * 2016-11-28 2024-03-26 삼성전자주식회사 Graphic processing apparatus and method for processing texture in graphics pipeline
CN108176048B (en) * 2017-11-30 2021-02-19 腾讯科技(深圳)有限公司 Image processing method and device, storage medium and electronic device
CN109448099B (en) * 2018-09-21 2023-09-22 腾讯科技(深圳)有限公司 Picture rendering method and device, storage medium and electronic device
CN109242961B (en) * 2018-09-26 2021-08-10 北京旷视科技有限公司 Face modeling method and device, electronic equipment and computer readable medium
CN109377545B (en) * 2018-09-28 2022-06-24 武汉艺画开天文化传播有限公司 Alembic-based model sharing and rendering method and electronic terminal
CN109671158A (en) * 2018-11-01 2019-04-23 苏州蜗牛数字科技股份有限公司 A kind of optimization method of game picture
CN109685869B (en) * 2018-12-25 2023-04-07 网易(杭州)网络有限公司 Virtual model rendering method and device, storage medium and electronic equipment
CN109816762B (en) * 2019-01-30 2023-08-22 网易(杭州)网络有限公司 Image rendering method and device, electronic equipment and storage medium
CN109961498B (en) * 2019-03-28 2022-12-13 腾讯科技(深圳)有限公司 Image rendering method, device, terminal and storage medium
CN110533755B (en) * 2019-08-30 2021-04-06 腾讯科技(深圳)有限公司 Scene rendering method and related device
CN110570505B (en) * 2019-09-11 2020-11-17 腾讯科技(深圳)有限公司 Image rendering method, device and equipment and storage medium
EP3792876A1 (en) * 2019-09-13 2021-03-17 FRAUNHOFER-GESELLSCHAFT zur Förderung der angewandten Forschung e.V. Apparatus, method and computer program for rendering a visual scene
CN111028361B (en) * 2019-11-18 2023-05-02 杭州群核信息技术有限公司 Three-dimensional model, material merging method, device, terminal, storage medium and rendering method
CN111476877B (en) * 2020-04-16 2024-01-26 网易(杭州)网络有限公司 Shadow rendering method and device, electronic equipment and storage medium
CN111540024B (en) * 2020-04-21 2024-02-23 网易(杭州)网络有限公司 Model rendering method and device, electronic equipment and storage medium
CN111508053B (en) * 2020-04-26 2023-11-28 网易(杭州)网络有限公司 Rendering method and device of model, electronic equipment and computer readable medium
CN112138386B (en) * 2020-09-24 2024-12-03 网易(杭州)网络有限公司 Volume rendering method, device, storage medium and computer equipment
CN112215934B (en) * 2020-10-23 2023-08-29 网易(杭州)网络有限公司 Game model rendering method and device, storage medium and electronic device
CN112316420B (en) * 2020-11-05 2024-03-22 网易(杭州)网络有限公司 Model rendering method, device, equipment and storage medium
CN112598770B (en) * 2020-12-22 2023-08-08 福建天晴数码有限公司 Real-time decal rendering method and system based on model three-dimensional coordinate space
CN112619154B (en) * 2020-12-28 2024-07-19 网易(杭州)网络有限公司 Virtual model processing method and device and electronic device
CN112652044B (en) * 2021-01-05 2024-06-21 网易(杭州)网络有限公司 Particle special effect rendering method, device, equipment and storage medium
CN112785674B (en) * 2021-01-22 2024-08-30 北京蔚海灿娱数字科技有限公司 Texture map generation method, rendering device, equipment and storage medium
CN112802172B (en) * 2021-02-24 2024-03-01 网易(杭州)网络有限公司 Texture mapping method and device for three-dimensional model, storage medium and computer equipment
CN112785679B (en) * 2021-03-15 2024-11-08 网易(杭州)网络有限公司 Crystal model rendering method and device, computer storage medium, and electronic device
CN112933597B (en) * 2021-03-16 2022-10-14 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN112915536B (en) * 2021-04-02 2024-03-22 网易(杭州)网络有限公司 Virtual model rendering method and device
CN113077539B (en) * 2021-04-08 2022-06-14 网易(杭州)网络有限公司 Target virtual model rendering method and device and electronic equipment
CN113398583A (en) * 2021-07-19 2021-09-17 网易(杭州)网络有限公司 Applique rendering method and device of game model, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN113947657A (en) 2022-01-18

Similar Documents

Publication Publication Date Title
CN113947657B (en) Rendering method, device, equipment and storage medium of target model
US7400331B2 (en) Apparatus and methods for texture mapping
CN107358649B (en) Processing method and device of terrain file
US9589386B2 (en) System and method for display of a repeating texture stored in a texture atlas
US10217259B2 (en) Method of and apparatus for graphics processing
GB2392072A (en) Generating shadow image data of a 3D object
CN113289334B (en) Game scene display method and device
GB2406252A (en) Generation of texture maps for use in 3D computer graphics
US7158133B2 (en) System and method for shadow rendering
CN107610225A (en) A kind of oblique photograph outdoor scene threedimensional model monomerization approach
CN113192173B (en) Image processing method and device of three-dimensional scene and electronic equipment
CN113457161A (en) Picture display method, information generation method, device, equipment and storage medium
US5793372A (en) Methods and apparatus for rapidly rendering photo-realistic surfaces on 3-dimensional wire frames automatically using user defined points
CN113132799B (en) Video playing processing method and device, electronic equipment and storage medium
US20230186575A1 (en) Method and apparatus for combining an augmented reality object in a real-world image
CN108171784B (en) Rendering method and terminal
CN114299202B (en) Processing method and device, storage medium and terminal for virtual scene production
CN117710549A (en) Rendering methods and equipment
CN113181642A (en) Method and device for generating wall model with mixed material
CN105224325A (en) Rendering intent and device
US20240193864A1 (en) Method for 3d visualization of sensor data
CN104361622A (en) Interface drawing method and device
HK40048370B (en) Video playing processing method and apparatus, electronic device and storage medium
HK40048370A (en) Video playing processing method and apparatus, electronic device and storage medium
CN118941685A (en) Effect superposition method, device, electronic device, storage medium and program product based on unity engine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant