
CN111462313B - Method, device and terminal for realizing fluff effect - Google Patents


Info

Publication number
CN111462313B
CN111462313B
Authority
CN
China
Prior art keywords
object model
initial object
fluff
initial
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010257348.4A
Other languages
Chinese (zh)
Other versions
CN111462313A (en)
Inventor
李展钊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202010257348.4A
Publication of CN111462313A
Application granted
Publication of CN111462313B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The invention provides a method, a device and a terminal for realizing a fluff effect, and relates to the technical field of computer graphics. The method obtains an initial object model of the fluff effect to be simulated and copies it to obtain a preset number of initial object models. Each copied initial object model is extruded in turn by a corresponding preset offset relative to the bottom initial object model along that model's normal direction, and the preset number of initial object models are then merged into one model to obtain a target object model for simulating the fluff effect. Because the target object model is obtained directly, multi-layer fluff can be simulated without multi-pass rendering, which alleviates the high performance cost and strong limitations of the prior-art layer rendering method for simulating a fluff effect, thereby reducing the render count as well as the performance and configuration requirements.

Description

Method, device and terminal for realizing fluff effect
Technical Field
The present invention relates to the field of computer graphics image processing technologies, and in particular, to a method, an apparatus, and a terminal for implementing a fluff effect.
Background
With the development of image processing technology, three-dimensional rendering is increasingly used in computer graphics. Owing to its characteristics, the layer rendering technique is widely used for drawing fluff.
The current layer rendering technique renders the hair length layer by layer using multi-pass rendering: each pass renders one layer, with vertex positions extruded out of the model surface along the normal, and the more passes and layers used, the better the rendering looks. However, as the number of fluff layers grows, each pass renders once more, so the render count and the performance cost increase accordingly. The scheme also depends heavily on multi-pass rendering; if the engine does not provide that function, the fluff effect cannot be realized at all. In other words, the prior-art method of simulating a fluff effect by layer rendering suffers from high performance cost and strong limitations.
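The prior-art multi-pass loop described above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; the names `render_fur_multipass` and `submit_pass` are invented for illustration:

```python
def render_fur_multipass(submit_pass, num_layers, base_offset=0.005):
    """Prior-art approach: one render pass per fur layer. Each pass
    re-submits the whole mesh with vertices pushed further out along
    the normal, so the draw-call count grows linearly with the layer
    count."""
    passes = 0
    for layer in range(num_layers):
        submit_pass(layer * base_offset)  # offset of this shell layer
        passes += 1
    return passes
```

This linear growth in passes is exactly the performance cost the patent's single-mesh approach avoids.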
Disclosure of Invention
The invention aims to provide a method, a device and a terminal for realizing a fluff effect, so as to alleviate the high performance cost and strong limitations of the prior-art layer rendering method for simulating a fluff effect.
In a first aspect, an embodiment provides a method for implementing a fluff effect, where the method includes: acquiring an initial object model of a fluff effect to be simulated; in response to a model replication instruction, replicating the initial object models to obtain a preset number of initial object models, and sequentially extruding corresponding preset offset amounts of each initial object model obtained through replication relative to the bottom initial object model along the normal direction of the bottom initial object model; and combining the preset number of initial object models into one model to obtain a target object model for simulating the fluff effect.
In one possible embodiment, the method further comprises: and responding to the vertex index setting instruction, sequentially carrying out index marking on the vertexes of each initial object model to obtain index numbers of the vertexes of each initial object model, wherein the index numbers are used for representing the rendering sequence of each initial object model.
In one possible implementation manner, sequentially indexing the vertices of each initial object model to obtain an index number of the vertex of each initial object model, including: obtaining a second set of map coordinates of each initial object model; and respectively indexing and marking the vertexes of the corresponding initial object models according to the obtained second set of mapping coordinates of each initial object model to obtain the index numbers of the vertexes of each initial object model.
In one possible embodiment, the method further comprises: and sequentially storing index numbers of the vertexes of each initial object model in an index cache region.
In one possible embodiment, the method further comprises: in response to a second set of map coordinates setting operation on each initial object model, values of the second set of map coordinates of each initial object model are set, the second set of map coordinates being used to save the increment for the preset offset.
In one possible implementation, setting the values of the second set of map coordinates for each initial object model includes: and setting the value of the second set of map coordinates of each initial object model according to the preset multiple of the preset offset corresponding to each initial object model.
In one possible embodiment, the method further comprises: responding to the fluff material setting operation, setting material parameters corresponding to each initial object model, wherein the material parameters comprise at least one of the following: transparency of the initial object model, hair concentration, hair permeability, hair color intensity, target offset of the initial object model relative to the underlying initial object model.
In one possible implementation manner, when the material parameter is transparency of the initial object model, setting the material parameter corresponding to each initial object model includes: determining a transparency value of a corresponding initial object model according to a preset fluff noise wave map and a second set of map coordinates of each initial object model; the transparency of each initial object model is set according to the obtained transparency value.
In one possible implementation, when the material parameter is a target offset of the initial object model relative to the underlying initial object model, setting the material parameter corresponding to each initial object model includes: acquiring a value of a second set of map coordinates of each initial object model and a corresponding preset offset; and obtaining the target offset of each initial object model relative to the bottom initial object model according to the value of the second set of map coordinates corresponding to each initial object model and the preset offset.
In a second aspect, an embodiment provides a method for implementing a fluff effect, where the method includes: acquiring a target object model of a fluff effect to be simulated, wherein the target object model comprises a plurality of layers of initial object models, and the initial object models except the initial object model positioned at the bottom layer in the plurality of layers of initial object models are sequentially extruded by corresponding preset offset relative to the initial object model at the bottom layer along the normal direction of the initial object model at the bottom layer; acquiring a fluff material file corresponding to the target object model; and rendering the target object model according to the fluff material file.
In one possible implementation, rendering the target object model according to the nap material file includes: acquiring index numbers of vertexes of each initial object model; and rendering each initial object model in the target object model according to the sequence of index numbers of the vertexes of each initial object model and the fluff material files in turn.
In one possible implementation, obtaining the index number of the vertex of each initial object model includes: and obtaining the index number of the vertex of each initial object model from an index cache area, wherein the index cache area is used for storing the second set of mapping coordinates of each initial object model.
In one possible embodiment, the target offset of each initial object model in the fluff material file relative to the bottom initial object model is obtained according to the value of the second set of map coordinates of each initial object model and the corresponding preset offset.
In one possible embodiment, the transparency of each initial object model in the fluff material file is obtained according to a preset fluff noise wave map and a second set of map coordinates of each initial object model.
In a third aspect, an embodiment provides a device for implementing a fluff effect, where the device includes: the first acquisition module is used for acquiring an initial object model of the fluff effect to be simulated; the copying module is used for copying the initial object models in response to the model copying instruction to obtain a preset number of initial object models, and each copied initial object model sequentially extrudes corresponding preset offset relative to the bottom initial object model along the normal direction of the bottom initial object model; and the merging module is used for merging the initial object models with the preset number into one model to obtain a target object model for simulating the fluff effect.
In a fourth aspect, an embodiment provides a device for implementing a fluff effect, the device including: the second acquisition module is used for acquiring a target object model of the fluff effect to be simulated, wherein the target object model comprises a plurality of layers of initial object models, and the initial object models except the initial object model positioned at the bottom layer in the plurality of layers of initial object models are sequentially extruded to corresponding preset offset relative to the initial object model at the bottom layer along the normal direction of the initial object model at the bottom layer; the third acquisition module is used for acquiring a fluff material file corresponding to the target object model; and the rendering module is used for rendering the target object model according to the fluff material file.
In a fifth aspect, an embodiment provides a terminal, including a memory, and a processor, where the memory stores a computer program executable on the processor, and the processor implements the steps of the method according to any one of the foregoing embodiments when the processor executes the computer program.
In a sixth aspect, embodiments provide a computer readable storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to perform the method of any of the preceding embodiments.
The embodiments of the present application provide a method, a device and a terminal for realizing a fluff effect. The method obtains an initial object model of the fluff effect to be simulated and, in response to a model replication instruction, copies it to obtain a preset number of initial object models; each copied initial object model is extruded in turn by a corresponding preset offset relative to the bottom initial object model along that model's normal direction; and the preset number of initial object models are merged into one model to obtain a target object model for simulating the fluff effect. The method directly produces the target object model, needs no multi-pass rendering, and can simulate multi-layer fluff with only a single render, alleviating the high performance cost and strong limitations of the prior-art layer rendering method and thereby reducing the render count as well as the performance and configuration requirements.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a flow chart of a method for implementing a fluff effect according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an effect of simulating the number of fluff layers according to an embodiment of the present invention;
FIG. 3 is a flow chart of another method for implementing a fluff effect according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a device for implementing a fluff effect according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another device for implementing a fluff effect according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
The terms "comprising" and "having" and any variations thereof, as used in the embodiments of the present application, are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
With the rapid development of mobile devices such as mobile phones and tablets, players demand ever finer game scenes. In the prior art, effects such as fine hair and nap are generally achieved with a layer rendering technique: the hair length is rendered layer by layer using the Multi Pass function, each Pass renders one layer, and vertex positions are extruded out of the model surface along the normal; the more layers and Passes used, the better the rendering looks. However, as the number of fluff layers grows, each Pass renders once more, so the render count and the performance cost increase. The scheme also depends heavily on the Multi Pass function; if the engine does not provide it, the fluff effect cannot be realized. That is, the prior-art method of simulating a fluff effect by layer rendering suffers from high performance cost and strong limitations.
Based on this, the embodiments of the present invention provide a method, a device and a terminal for realizing a fluff effect, so as to alleviate the high performance cost and strong limitations of the prior-art layer rendering method for simulating a fluff effect.
To facilitate understanding of the present embodiment, a method for implementing a fluff effect disclosed in this embodiment is first described in detail. Referring to the flow chart of a method for implementing a fluff effect shown in fig. 1, the method may be executed by a terminal and mainly includes the following steps S110 to S130:
s110, acquiring an initial object model of the fluff effect to be simulated.
Wherein the initial object model comprises a first set of map coordinates (UV coordinates) for determining the position of the map. The method may be applied to a development terminal used to run three-dimensional animation software, such as 3D Studio Max (3DMAX). Specifically, the terminal may download the application installation package of the software client from a server, or from the server corresponding to an application store. The software client is installed and run by installing this package on the terminal; when the client runs, it can display the user interface and respond to user operations based on the resources in the installation package, thereby interacting with the user.
In addition, the software-generated model resources may be applied to the gaming terminal. Specifically, when the game terminal runs the game client, the display of the user interface and the response to the operation of the user can be performed based on the game resource, so as to realize the interaction with the user, wherein the game resource can comprise a model of an object in the game and the like.
In a game, a game scene includes a plurality of objects, which may also be referred to as virtual objects. Any object may correspond to a model, which may be a model generated by three-dimensional animation software. The game terminal may render to obtain a game screen including the object based on the model of the object.
The initial object model may correspond to a plurality of surfaces, and a surface of the plurality of surfaces that needs to generate fluff may be referred to as an initial surface. In other words, the initial object model includes a region of the nap to be drawn, which corresponds to an initial surface.
In addition, a development tool may be used to model the fluff area in three dimensions and to add a first set of map coordinates to the model. The map coordinates may be used to locate, for each point, the mapping information that relates the initial object model and the three-dimensional model.
S120, in response to the model replication instructions, the initial object models are replicated to obtain a preset number of initial object models, and each replicated initial object model sequentially extrudes corresponding preset offset relative to the bottom initial object model along the normal direction of the bottom initial object model.
In the embodiment of the present application, when the initial object model is copied, the copy can be offset by a specified amount along the normal direction of the bottom initial object model. Replication may also be referred to as copying, and may rely on the extrusion command (Extrude) in the three-dimensional animation software to obtain the number of simulated fluff layers. The copy can be regarded as hierarchical modeling: for example, in three-dimensional animation software (e.g. 3DMax), the pre-built fluff region model is copied in place.
As an example, see part (a) of fig. 2. An initial surface is copied six times and offset to obtain six copy surfaces; the initial surface and the copy surfaces together realize the fluff effect. The preset number is related to the fluff effect: the simplest fluff effect generally uses three layers, and more layers give a better, more lifelike result, but the processing is more complex and the performance cost higher, so the number is usually set after weighing factors such as hardware. For example, the current performance of the terminal may be obtained, which may refer to the capabilities and current load of the graphics processor, the application processor, the memory, and so on, and the preset number matched with the terminal's current performance is determined accordingly.
The preset offset may be a predetermined extrusion amount. For example, if the extrusion amount corresponding to the first replication surface (i.e., the first layer) is 0.005, then, extruding each replication surface in turn, the extrusion amount corresponding to the next replication surface (the second layer) is 0.010, and that of the one after (the third layer) is 0.015.
The preset offset for each initial object model extrusion is different, and the preset offset for each initial object model extrusion may be a fixed multiple, for example: the distance between the first layer and the bottom layer is 0.005 by default, the distance between the second layer and the bottom layer is 0.01 by default, and the distance between the third layer and the bottom layer is 0.015 by default; or may not be a fixed multiple, such as: the first layer is a default of 0.005 from the bottom layer, the second layer is a default of 0.008 from the bottom layer, and the third layer is a default of 0.012 from the bottom layer.
S130, combining the initial object models with the preset number into one model to obtain a target object model for simulating the fluff effect.
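Steps S120 and S130 can be sketched as follows. This is a minimal numpy illustration under assumptions of my own: the triangle-mesh representation (vertex, normal, and face arrays) and all names are invented for illustration and are not the patent's actual implementation:

```python
import numpy as np

def build_shell_model(vertices, normals, faces, num_layers=3, base_offset=0.005):
    """Duplicate the base mesh num_layers times, extrude each copy along
    the base mesh's vertex normals by a growing preset offset, and merge
    everything into a single mesh renderable in one draw call."""
    all_verts = [vertices]            # layer 0: the bottom initial model
    all_faces = [faces]
    for layer in range(1, num_layers + 1):
        offset = base_offset * layer  # e.g. 0.005, 0.010, 0.015, ...
        all_verts.append(vertices + normals * offset)
        # re-index the copied faces to point at the copied vertices
        all_faces.append(faces + layer * len(vertices))
    return np.vstack(all_verts), np.vstack(all_faces)
```

Because the layers end up in one merged mesh, the whole shell stack can be submitted to the GPU in a single render, which is the core of the patent's claimed saving.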
The method can be executed by a development terminal of the three-dimensional animation production software to generate a target object model, and the model can be loaded in a game client of the game terminal and rendered to obtain a picture of the target object in a game scene.
According to the method, an initial object model of the fluff effect to be simulated is obtained and, in response to a model replication instruction, copied to obtain a preset number of initial object models; each copied initial object model is extruded in turn by a corresponding preset offset relative to the bottom initial object model along that model's normal direction; and the preset number of initial object models are merged into one model to obtain a target object model for simulating the fluff effect. The method directly produces the target object model, needs no multi-pass rendering, and can simulate multi-layer fluff with only a single render, alleviating the high performance cost and strong limitations of the prior-art layer rendering method and thereby reducing the render count as well as the performance and configuration requirements.
In some embodiments, in order to better control the replicated multi-layer model and better realize the fluff effect during rendering, the multiple initial object models can be further ordered so as to achieve a better rendering result. Based on this, as an example, the method further comprises the following steps:
And 1), responding to the vertex index setting instruction, and sequentially carrying out index marking on the vertices of each initial object model to obtain index numbers of the vertices of each initial object model.
Further, in some embodiments, the step 1) includes:
step 1.1), obtaining a second set of map coordinates of each initial object model;
wherein the initial object model comprises a first set of map coordinates, the first set of map coordinates being used to determine the position of the map.
And 1.2), respectively carrying out index marking on the vertexes of the corresponding object models according to the obtained second set of mapping coordinates of each initial object model to obtain index numbers of the vertexes of each initial object model.
Further, in some embodiments, the step 1) further includes:
step 1.3), sequentially storing index numbers of the vertexes of each initial object model in an index cache region.
This step is performed by a development tool, which may be a triangle sorting tool or the like; such a tool may also be used as a plug-in to the modeling software 3DMAX. The triangle sorting tool responds to a developer's vertex index setting instruction and, according to the second set of map coordinates of each initial object model, indexes and marks the vertices of each initial object model in turn to obtain the index numbers of the vertices of each initial object model. The index numbers represent the rendering order of the initial object models; that is, the target object model is numbered according to the U value of the UV coordinates. The larger the offset of a copied initial object model relative to the bottom initial object model, the later it is ordered.
The vertex coordinate value of each initial object model may be expressed as uv2.x. Further, the coordinate information of each initial object model may be edited in uv2.x: as shown in part (b) of fig. 2, six points are increased by 0.05 in turn from inside to outside to carry a hierarchy label, which facilitates subsequent sorting and thus rendering of the fluff layers from inside to outside, solving the transparency sorting problem. For example, based on the triangle sorting tool, ordering can be implemented according to the position in the index buffer of the vertex's coordinate value in uv2.x. In general, the smaller a vertex's coordinate value, the earlier the vertex appears in the index buffer.
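The inside-out ordering by uv2.x can be sketched as follows. This is a hypothetical numpy illustration of what a triangle sorting tool might do; the function name and the flat-array mesh representation are assumptions, not the patent's tool:

```python
import numpy as np

def sort_index_buffer_by_layer(faces, uv2_x):
    """Reorder triangles so inner layers (smaller uv2.x hierarchy label)
    come first in the index buffer, giving the inside-out rendering
    order needed for correct transparency blending."""
    # all three vertices of a triangle share the same layer label,
    # so the first vertex's uv2.x identifies the triangle's layer
    keys = uv2_x[faces[:, 0]]
    order = np.argsort(keys, kind="stable")
    return faces[order]
```

A stable sort keeps the original triangle order within each layer, so only the layer-to-layer order changes.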
In some embodiments, the method for implementing the fluff effect further includes the following steps:
and 2) setting values of a second set of map coordinates of each initial object model in response to a second set of map coordinate setting operation on each initial object model, wherein the second set of map coordinates is used for storing the increment for the preset offset.
In some embodiments, step 2) above comprises the steps of:
step 2.1), setting values of a second set of map coordinates of each initial object model according to preset multiples of the preset offset corresponding to each initial object model.
Wherein the three-dimensional animation software generates a second set of map coordinates for each initial object model that can hold an increment of each initial object model to a preset offset. The preset offset may be a predetermined amount of extrusion.
The increment for each preset offset may be the same or different. For example, with increments of (0.005x, 0.01x, 0.015x), where x is a scale factor: the distance between the first layer and the bottom layer defaults to 0.005 + 0.005x, that between the second layer and the bottom layer to 0.01 + 0.01x, and that between the third layer and the bottom layer to 0.015 + 0.015x. The increment may also be zero; it can be chosen according to the artists' convenience and the desired effect.
In some embodiments, the method for implementing the fluff effect further includes the following steps:
and 3) responding to the fluff material setting operation, and setting material parameters corresponding to each initial object model.
Wherein the texture parameters include at least one of: transparency of the initial object model, hair concentration, hair permeability, hair color intensity, target offset of the initial object model relative to the underlying initial object model. For example, one manner of setting the texture parameters in modeling software is shown in FIG. 2 (c).
The transparency of the initial object model may be a preset initial transparency, the transparency of the copied initial object model is determined based on the initial transparency and a transparency coefficient of the copied initial object model, the transparency coefficient of the copied initial object model is determined according to the preset coefficient and a copied hierarchy, and the further the copied initial object model is from the initial object model, the higher the hierarchy is.
The hair density, i.e. the degree of concentration of fluff, can be expressed by the amount of fluff per unit area/volume; the density of the initial object model is preset initial density, the density of the replication object model is determined based on the initial density and the density coefficient of the replication object model, and the density coefficient of the replication object model is determined according to the preset coefficient and the replication level.
In some embodiments, when the material parameter is transparency of the initial object model, the step 3) sets the material parameter corresponding to each initial object model, including:
step 3.1), determining a transparency value of a corresponding initial object model according to a preset fluff noise wave map and a second set of map coordinates of each initial object model;
step 3.2), setting the transparency of each initial object model according to the obtained transparency values.
The preset fluff noise wave map can be used to determine the type of the hairs and can be set through modeling software. Transparency can be represented by the value of the Alpha channel, which stores a per-pixel value that manifests as "transparency".
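A common way such a noise map drives per-layer alpha, sketched here under assumptions (the description does not fix the comparison rule), is to treat the noise value sampled at the second-set map coordinates as a hair-height field and compare it against the layer's normalized height:

```python
def shell_alpha(noise_value, layer_height):
    """Alpha for one shell fragment.

    noise_value: fluff noise map sample in [0, 1] at the second-set UVs.
    layer_height: this layer's normalized height in [0, 1].
    A fragment is opaque (inside a hair) while the noise exceeds the layer
    height, so hairs taper off as the shells move outward.
    """
    return 1.0 if noise_value > layer_height else 0.0
```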
In some embodiments, when the material parameter is a target offset of the initial object model relative to the underlying initial object model, the step 3) sets the material parameter corresponding to each initial object model, including:
step 3.3), obtaining the value of the second set of map coordinates of each initial object model and the corresponding preset offset;
and 3.4) obtaining the target offset of each initial object model relative to the bottom initial object model according to the value of the second set of mapping coordinates corresponding to each initial object model and the preset offset.
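One plausible reading of steps 3.3) and 3.4), offered as an assumption since the description does not give the exact arithmetic, is that the second-set map coordinate value acts as a per-layer multiplier on the preset offset:

```python
def target_offset(uv2_value, preset_offset):
    """Target offset of a copied layer relative to the bottom model.

    Assumed combination: the value stored in the second set of map
    coordinates scales the preset offset for that layer.
    """
    return uv2_value * preset_offset

# A layer whose second-set UV stores 2.0 lands twice the preset offset out.
offset = target_offset(2.0, 0.005)
```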
According to the method for realizing the fluff effect, provided by the embodiment of the invention, the initial object models of the fluff effect to be simulated are obtained, and are copied to obtain the initial object models with the preset number; each copied initial object model sequentially extrudes corresponding preset offset along the normal direction of the bottom initial object model relative to the bottom initial object model; and combining the initial object models with the preset number into one model to obtain a target object model for simulating the fluff effect.
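The copy, extrude, and merge pipeline summarized above can be sketched as follows; the mesh representation (parallel lists of vertex positions and unit normals) is an assumption for illustration, not the method's actual data format:

```python
def build_shells(vertices, normals, num_layers, increment):
    """Copy the base mesh num_layers times, push each copy out along the
    vertex normals by its layer offset, and merge everything into one list."""
    merged = list(vertices)  # layer 0: the bottom initial object model
    for layer in range(1, num_layers + 1):
        off = increment * layer
        merged.extend(
            (x + nx * off, y + ny * off, z + nz * off)
            for (x, y, z), (nx, ny, nz) in zip(vertices, normals)
        )
    return merged

# One vertex at the origin with an up-facing normal, two copied layers:
shells = build_shells([(0.0, 0.0, 0.0)], [(0.0, 0.0, 1.0)], 2, 0.5)
```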
With the method provided by this embodiment, the constructed target object model can be loaded directly on the game client and rendered in a single pass, without relying on Multi Pass functionality, since the simulation of the layer count is already completed in the modeling software. On the one hand, this reduces performance consumption; on the other hand, the hair length can be simulated directly in the 3D modeling software, which is more convenient and flexible; moreover, the method is compatible with older engine versions. In summary, the fluff drawing method provided by this embodiment alleviates the problems of high performance consumption and strong limitations in prior-art methods that render intermediate layers to simulate a fluff effect, thereby reducing the number of render passes and lowering performance consumption and configuration requirements.
The embodiment of the present application also provides another implementation method of the fluff effect, referring to the flowchart shown in fig. 3, the method may be executed by a terminal, and mainly includes the following steps S310 to S330:
s310, a target object model of the fluff effect to be simulated is obtained.
The target object model comprises a plurality of layers of initial object models, wherein initial object models except for the initial object model positioned at the bottom layer in the plurality of layers of initial object models sequentially extrude corresponding preset offset relative to the initial object model at the bottom layer along the normal direction of the initial object model at the bottom layer. Specifically, the bottom layer also corresponds to the inner layer.
The method can be applied to a game client, and the game client can be installed on the terminal. Specifically, the terminal may download the application package of the game client from the game's server or from a server corresponding to an application store. The application package may include various game resources; the game client is installed and run by installing the application package on the terminal. When the client runs, it can display a user interface and respond to user operations based on the game resources in the package, so as to interact with the user. The game resources may include models of objects in the game, and so on.
In addition, the method can also be applied to cloud games or web games and the like.
For web games, the client of the game may refer to a browser, through which the terminal requests game resources from the server, so as to run the game resources through the browser and interact with the user.
For a cloud game, the terminal may refer to a cloud server of the cloud game, and the cloud server may send the rendered game image to the user equipment, so that the user equipment displays the rendered game image.
In a game scene, a target object with a fluff effect may be included, where the target object may correspond to resources such as an initial model, a texture map, and configuration information. For example, the target object may include a virtual garment, a virtual scene, or a virtual character, where one or more of these may each have a fluff region to be drawn; the fluff region to be drawn may be all or part of the region corresponding to the virtual garment, virtual scene, or virtual character. For example, the virtual garment may be made up of a plurality of components, which may include sleeves and a collar, and the collar may be the fluff region to be drawn.
S320, a fluff material file corresponding to the target object model is obtained.
As an example, the target offset of each initial object model in the fluff material file relative to the underlying initial object model is obtained from the values of the second set of map coordinates of each initial object model and the corresponding preset offset.
As an example, the transparency of each initial object model in the fluff material file is obtained from a preset fluff noise wave map and the second set of map coordinates of each initial object model.
The fluff material file may include configuration parameters, including a preset number of layers, a preset coefficient, a fluff noise wave map, and the like. In other words, the initial model, the preset number of layers, the preset coefficient, and the fluff noise wave map are pre-arranged on the terminal. In addition, the preset number of layers, the preset coefficient, and the fluff noise wave map can be determined according to the user's configuration. For example, the terminal may determine the corresponding number of layers, coefficients, etc. in response to a user selection of a fluff effect level. Or, the terminal may determine the preset number of layers or the preset coefficient in response to a user operation configuring specific parameter values such as the number of layers or the coefficient. For another example, the terminal may determine the fluff noise wave map corresponding to a user-configured fluff style or type in response to a configuration operation on the fluff style or type. The fluff noise wave map is used to indicate the type of fluff; for example, it may indicate fluff thickness and density.
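Purely for illustration, the configuration parameters carried by a fluff material file might look like the following dictionary; every field name here is an assumption, not the patent's actual file format, and the level-to-layer mapping is likewise hypothetical.

```python
# Assumed shape of the configuration parameters in a fluff material file.
fluff_material = {
    "num_layers": 20,                # preset number of shell layers
    "coefficient": 0.85,             # preset per-layer falloff coefficient
    "noise_map": "fluff_noise.png",  # fluff noise wave map (hair type)
    "hair_color_intensity": 1.0,
    "scale": 1.0,                    # scaling parameter
}

def layers_for_level(level):
    """Map a user-selected fluff effect level to a layer count (assumed mapping)."""
    return {"low": 8, "medium": 16, "high": 32}.get(level, 16)
```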
And S330, rendering the target object model according to the fluff material file.
After the configuration parameters corresponding to the target object model are determined, the corresponding rendering parameters can be determined by combining the fluff noise wave map, and the rendering parameters and the corresponding maps are used for rendering, so that the fluff effect can be realized.
In some embodiments, the step S330 includes:
step 4), obtaining index numbers of vertexes of each initial object model;
and 5) rendering each initial object model in the target object model according to the sequence of index numbers of the vertexes of each initial object model and the fluff material files in turn.
In some embodiments, step 4) above comprises:
step 4.1), obtaining the index number of the vertex of each initial object model from an index buffer area, wherein the index buffer area is used for storing the second set of mapping coordinates of each initial object model.
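Steps 4) and 5) can be sketched as follows: each shell's vertices carry index numbers, and shells are drawn in ascending index order so inner layers render before outer ones. The flat buffer layout of (start index, layer id) pairs is an assumption for illustration.

```python
def draw_order(layer_start_indices):
    """Return layer ids sorted by the index number of their first vertex,
    i.e. the order in which the shells are rendered (inner to outer)."""
    return [layer for _, layer in sorted(layer_start_indices)]

# Each (start_index, layer_id) pair marks where a shell begins in the buffer.
order = draw_order([(200, "layer2"), (0, "bottom"), (100, "layer1")])
# order == ["bottom", "layer1", "layer2"]
```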
In addition, the configuration parameters corresponding to the target object model may further include scaling parameters, environment parameters, and the like. The environment parameters may refer to ambient occlusion parameters, ambient absorption parameters, or ambient light absorption parameters, and describe the effect of occluding surrounding diffusely reflected light where objects intersect or come close to one another. They can solve or improve problems such as light leakage, floating, and unrealistic shadows, as well as the unclear rendering of gaps, folds, corners, edge lines, and fine objects in a scene. They comprehensively improve detail, especially in dark shadows, enhance the layering and realism of a space, strengthen the bright-dark contrast of a picture, and enhance its artistic quality. Ambient Occlusion (AO) is typically used to indicate the occlusion of ambient light, and may be used to adjust the layering and realism of a space.
According to the method for realizing the fluff effect, provided by the embodiment of the invention, the initial object model of the fluff effect to be simulated is obtained, the fluff material file corresponding to the target object model is obtained, and then the target object model is rendered according to the fluff material file, so that the fluff with the multi-layer effect can be simulated only by one rendering, and the performance consumption is reduced. The method relieves the problems of large performance consumption and larger limitation of the method for rendering the simulated fluff effect in the middle layer in the prior art, thereby realizing the technical effects of reducing rendering times and lowering performance consumption and configuration requirements.
Fig. 4 is a schematic structural diagram of a device for implementing a fluff effect according to an embodiment of the present invention. The device is applied to a terminal, and comprises:
a first obtaining module 410, configured to obtain an initial object model of a fluff effect to be simulated;
the replication module 420 is configured to replicate the initial object models in response to a model replication instruction, so as to obtain a preset number of initial object models, and each of the replicated initial object models sequentially extrudes a corresponding preset offset along a normal direction of the bottom initial object model relative to the bottom initial object model;
And the merging module 430 is configured to merge a preset number of initial object models into one model, so as to obtain a target object model for simulating the fluff effect.
In some embodiments, the apparatus further comprises: the marking module is used for responding to the vertex index setting instruction, sequentially carrying out index marking on the vertices of each initial object model to obtain index numbers of the vertices of each initial object model, wherein the index numbers are used for representing the rendering sequence of each initial object model.
In some embodiments, the marking module includes an acquiring unit and a numbering unit, where the acquiring unit is configured to acquire a second set of map coordinates of each initial object model; the initial object model comprises a first set of mapping coordinates, wherein the first set of mapping coordinates are used for determining the position of a mapping; the numbering unit is used for respectively indexing and marking the vertexes of the corresponding initial object models according to the obtained second set of mapping coordinates of each initial object model to obtain the index numbers of the vertexes of each initial object model.
In some embodiments, the marking module is further configured to store index numbers of vertices of each initial object model in the index buffer in sequence.
In some embodiments, the apparatus further comprises a coordinate setting module for setting values of a second set of map coordinates for each initial object model in response to a second set of map coordinates setting operation for each initial object model, the second set of map coordinates for maintaining an increment for a preset offset.
In some embodiments, the coordinate setting module is configured to set a value of the second set of map coordinates of each initial object model according to a preset multiple of the preset offset corresponding to each initial object model.
In some embodiments, the apparatus further includes a parameter setting module, configured to set, in response to the fluff material setting operation, a material parameter corresponding to each initial object model, where the material parameter includes at least one of: transparency of the initial object model, hair concentration, hair permeability, hair color intensity, target offset of the initial object model relative to the underlying initial object model.
In some embodiments, when the material parameter is transparency of the initial object model, the parameter setting module is configured to determine a transparency value of the corresponding initial object model according to a preset fluff noise wave map and a second set of map coordinates of each initial object model; the transparency of each initial object model is set according to the obtained transparency value.
In some embodiments, when the material parameter is a target offset of the initial object model relative to the underlying initial object model, the parameter setting module is configured to obtain a value of a second set of map coordinates of each initial object model and a corresponding preset offset; and obtaining the target offset of each initial object model relative to the bottom initial object model according to the value of the second set of map coordinates corresponding to each initial object model and the preset offset.
Fig. 5 is a schematic structural diagram of another device for achieving a fluff effect according to an embodiment of the present invention. The device is applied to a terminal, and comprises:
the second obtaining module 510 is configured to obtain a target object model of a fluff effect to be simulated, where the target object model includes multiple layers of initial object models, and initial object models in the multiple layers of initial object models except for an initial object model located at a bottom layer sequentially extrude corresponding preset offsets along a normal direction of the initial object model of the bottom layer relative to the initial object model of the bottom layer;
a third obtaining module 520, configured to obtain a fluff material file corresponding to the target object model;
the rendering module 530 is configured to render the target object model according to the fluff material file.
In some embodiments, the rendering module includes a number acquisition unit for acquiring an index number of a vertex of each initial object model, and a rendering unit; the rendering unit is used for sequentially rendering each initial object model in the target object model according to the sequence of index numbers of the vertexes of each initial object model and the fluff material files.
In some embodiments, the rendering unit is further configured to obtain an index number of a vertex of each initial object model; and rendering each initial object model in the target object model according to the sequence of index numbers of the vertexes of each initial object model and the fluff material files in turn.
In some embodiments, the rendering unit is further configured to obtain an index number of the vertex of each initial object model from an index buffer, where the index buffer is configured to store a second set of map coordinates of each initial object model.
In some embodiments, the target offset of each initial object model in the pile material file relative to the underlying initial object model is obtained from the values of the second set of map coordinates of each initial object model and the corresponding preset offset.
In some embodiments, the transparency of each initial object model in the fluff material file is obtained from a preset fluff noise wave map and a second set of map coordinates for each initial object model.
The implementation device of the fluff effect provided by the embodiment of the application may be specific hardware on the device or software or firmware installed on the device. The device provided in the embodiments of the present application has the same implementation principle and technical effects as those of the foregoing method embodiments, and for a brief description, reference may be made to corresponding matters in the foregoing method embodiments where the device embodiment section is not mentioned. It will be clear to those skilled in the art that, for convenience and brevity, the specific operation of the system, apparatus and unit described above may refer to the corresponding process in the above method embodiment, which is not described in detail herein. The device for realizing the fluff effect provided by the embodiment of the application has the same technical characteristics as the method for realizing the fluff effect provided by the embodiment, so that the same technical problems can be solved, and the same technical effect is achieved.
The embodiment of the application also provides a terminal, which specifically comprises a processor and a storage device; the storage means has stored thereon a computer program which, when executed by the processor, performs the method of any of the embodiments described above.
As an example, as shown in fig. 6, a computer device 400 provided in an embodiment of the present application includes a processor 401, a memory 402, and a bus. The memory 402 stores machine-readable instructions executable by the processor 401; when the computer device is running, the processor 401 communicates with the memory 402 through the bus, and the processor 401 executes the machine-readable instructions to perform the steps of the above method for implementing the fluff effect.
Specifically, the above-mentioned memory 402 and the processor 401 can be general-purpose memories and processors, and are not particularly limited herein, and the implementation method of the above-mentioned fluff effect can be performed when the processor 401 runs a computer program stored in the memory 402.
Wherein, the above-mentioned computer apparatus 400 may be used to perform the implementation method of the fluff effect shown in fig. 1, and at this time, the computer apparatus 400 may be a development terminal; the computer device 400 described above may also be used to perform the method shown in fig. 3, in which case the computer device 400 may be a gaming terminal.
Corresponding to the above method for implementing the fluff effect, the embodiment of the present application further provides a computer-readable storage medium storing machine-executable instructions which, when called and executed by a processor, cause the processor to perform the steps of the method for implementing the fluff effect.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners as well. The apparatus embodiments described above are merely illustrative, for example, flow diagrams and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, each functional unit in the embodiments provided in the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the mobile control method described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
It should be noted that like reference numerals and letters denote like items in the following figures; thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Furthermore, the terms "first," "second," "third," etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the foregoing examples are merely specific embodiments of the present application and are not intended to limit its scope of protection. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that anyone familiar with the art may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features within the technical scope disclosed in the present application; such modifications, changes, or substitutions do not depart from the scope of the embodiments of the present application and are intended to be included within its scope of protection.

Claims (15)

1. A method of achieving a fluff effect, the method comprising:
acquiring an initial object model of a fluff effect to be simulated;
copying the initial object models in response to a model copying instruction to obtain a preset number of initial object models, and sequentially extruding corresponding preset offset amounts of each copied initial object model relative to the bottom initial object model along the normal direction of the bottom initial object model;
Combining the initial object models with the preset number into a model to obtain a target object model for simulating the fluff effect;
sequentially carrying out index marking on the vertexes of each initial object model in response to a vertex index setting instruction to obtain index numbers of the vertexes of each initial object model, wherein the index numbers are used for representing the rendering sequence of each initial object model;
and sequentially storing index numbers of the vertexes of each initial object model in an index cache region.
2. The implementation method according to claim 1, wherein sequentially indexing the vertices of each initial object model to obtain an index number of the vertex of each initial object model includes:
obtaining a second set of map coordinates of each initial object model;
and respectively indexing and marking the vertexes of the corresponding initial object models according to the obtained second set of map coordinates of each initial object model to obtain index numbers of the vertexes of each initial object model.
3. The implementation method according to claim 1, characterized in that the method further comprises:
and setting values of a second set of map coordinates of each initial object model in response to a second set of map coordinate setting operation on each initial object model, wherein the second set of map coordinates is used for storing the increment for the preset offset.
4. A method according to claim 3, wherein said setting values of a second set of map coordinates for each of said initial object models comprises:
and setting the value of the second set of map coordinates of each initial object model according to the preset multiple of the preset offset corresponding to each initial object model.
5. The implementation method according to claim 1, characterized in that the method further comprises:
responding to a fluff material setting operation, setting material parameters corresponding to each initial object model, wherein the material parameters comprise at least one of the following: transparency of the initial object model, hair concentration, hair permeability, hair color intensity, target offset of the initial object model relative to the underlying initial object model.
6. The implementation method according to claim 5, wherein when the material parameter is transparency of the initial object model, the setting the material parameter corresponding to each initial object model includes:
determining a transparency value of the corresponding initial object model according to a preset fluff noise wave map and a second set of map coordinates of each initial object model;
And setting the transparency of each initial object model according to the obtained transparency value.
7. The implementation method according to claim 5, wherein when the material parameter is a target offset of the initial object model relative to the underlying initial object model, the setting the material parameter corresponding to each initial object model includes:
acquiring a value of a second set of map coordinates of each initial object model and a corresponding preset offset;
and obtaining the target offset of each initial object model relative to the bottom initial object model according to the value of the second set of map coordinates corresponding to each initial object model and the preset offset.
8. A method of achieving a fluff effect, the method comprising:
acquiring a target object model of a fluff effect to be simulated, wherein the target object model comprises a plurality of layers of initial object models, wherein initial object models except for an initial object model positioned at a bottom layer in the plurality of layers of initial object models are sequentially extruded by corresponding preset offset relative to the initial object model at the bottom layer along the normal direction of the initial object model at the bottom layer;
Acquiring a fluff material file corresponding to the target object model; the fluff material file is used for representing configuration parameters of fluff effects;
obtaining index numbers of the vertexes of each initial object model from an index cache area;
and rendering each initial object model in the target object model according to the sequence of index numbers of vertexes of each initial object model in turn according to the fluff material files.
9. The implementation of claim 8, wherein the index buffer is configured to store a second set of map coordinates for each of the initial object models.
10. The method according to claim 8, wherein the target offset of each initial object model in the fluff material file relative to the underlying initial object model is obtained according to the value of the second set of map coordinates of each initial object model and the corresponding preset offset.
11. The method according to claim 8, wherein the transparency of each of the initial object models in the fluff material file is obtained according to a preset fluff noise wave map and a second set of map coordinates of each of the initial object models.
12. A fluff effect achieving device, characterized in that the device comprises:
the first acquisition module is used for acquiring an initial object model of the fluff effect to be simulated;
the copying module is used for responding to the model copying instruction, copying the initial object models to obtain a preset number of initial object models, and sequentially extruding corresponding preset offset values of each copied initial object model relative to the bottom initial object model along the normal direction of the bottom initial object model;
the merging module is used for merging the initial object models with the preset number into a model to obtain a target object model for simulating the fluff effect;
the marking module is used for responding to the vertex index setting instruction, sequentially marking the vertex of each initial object model by indexes to obtain the index number of the vertex of each initial object model, wherein the index number is used for representing the rendering sequence of each initial object model;
the marking module is further used for sequentially storing the index numbers of the vertices of each initial object model in an index buffer.
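As a rough illustration of the copying, extruding, merging, and marking modules above, the following Python sketch builds a multi-layer shell model from a base mesh. The `Mesh` container and all names are hypothetical; a real engine would fill GPU vertex and index buffers rather than Python lists, but the index renumbering shows how the stored order encodes the rendering order.

```python
from dataclasses import dataclass


@dataclass
class Mesh:
    vertices: list  # [(x, y, z), ...]
    normals: list   # per-vertex unit normals, parallel to vertices
    indices: list   # flat triangle/primitive index list


def build_shell_model(base: Mesh, num_layers: int, step: float) -> Mesh:
    """Copy the base model num_layers times, extrude each copy along the
    base normals by i * step, merge everything into one mesh, and shift
    each copy's indices so the merged index list walks the layers
    bottom-up -- i.e. the index order is the rendering order."""
    verts, norms, idx = [], [], []
    n = len(base.vertices)
    for layer in range(num_layers):
        off = layer * step
        for (vx, vy, vz), (nx, ny, nz) in zip(base.vertices, base.normals):
            verts.append((vx + nx * off, vy + ny * off, vz + nz * off))
            norms.append((nx, ny, nz))
        # offset the base indices so this layer references its own vertices
        idx.extend(i + layer * n for i in base.indices)
    return Mesh(verts, norms, idx)
```

For example, a two-vertex base copied into three layers yields six vertices and the merged index list `[0, 1, 2, 3, 4, 5]`, layer by layer.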
13. A fluff effect achieving device, characterized in that the device comprises:
The second acquisition module is used for acquiring a target object model for which the fluff effect is to be simulated, wherein the target object model comprises multiple layers of initial object models, and the initial object models other than the bottom-layer initial object model are sequentially extruded, relative to the bottom-layer initial object model, by corresponding preset offsets along the normal direction of the bottom-layer initial object model;
the third acquisition module is used for acquiring a fluff material file corresponding to the target object model;
the rendering module is used for obtaining the index numbers of the vertices of each initial object model from an index buffer, and for rendering, according to the fluff material file, each initial object model in the target object model in the order of the index numbers of the vertices of each initial object model.
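The rendering module's behaviour, drawing each layer in the order its indices were stored, can be mimicked as below. This is a schematic stand-in for a real draw loop: `draw` would be a GPU draw call in practice, and the contiguous per-layer index layout is an assumption consistent with the marking module of claim 12, not a detail stated by claim 13.

```python
def render_in_index_order(index_buffer, indices_per_layer, draw):
    """Walk the merged index buffer slice by slice and draw each initial
    object model (layer) in stored order: bottom layer first, outer
    layers after, so partially transparent outer shells blend over the
    inner ones."""
    for start in range(0, len(index_buffer), indices_per_layer):
        draw(index_buffer[start:start + indices_per_layer])
```

Calling it with the merged buffer from the generation step issues one draw per layer, bottom-up.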
14. A terminal comprising a memory and a processor, the memory having stored therein a computer program executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 11.
15. A computer-readable storage medium storing computer-executable instructions which, when invoked and executed by a processor, cause the processor to perform the method according to any one of claims 1 to 11.
CN202010257348.4A 2020-04-02 2020-04-02 Method, device and terminal for realizing fluff effect Active CN111462313B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010257348.4A CN111462313B (en) 2020-04-02 2020-04-02 Method, device and terminal for realizing fluff effect

Publications (2)

Publication Number Publication Date
CN111462313A (en) 2020-07-28
CN111462313B (en) 2024-03-01

Family

ID=71685845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010257348.4A Active CN111462313B (en) 2020-04-02 2020-04-02 Method, device and terminal for realizing fluff effect

Country Status (1)

Country Link
CN (1) CN111462313B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112396680B (en) * 2020-11-27 2022-11-25 完美世界(北京)软件科技发展有限公司 Method and device for making hair flow diagram, storage medium and computer equipment
CN112419465B (en) * 2020-12-09 2024-05-28 网易(杭州)网络有限公司 Virtual model rendering method and device
CN113409465B (en) * 2021-06-23 2023-05-12 网易(杭州)网络有限公司 Hair model generation method and device, storage medium and electronic equipment
CN113888688B (en) * 2021-08-20 2023-01-03 完美世界互娱(北京)科技有限公司 Hair rendering method, device and storage medium
CN113947653B (en) * 2021-09-27 2023-04-07 四川大学 Simulation method of real texture hair
CN116109744A (en) * 2021-11-10 2023-05-12 北京字节跳动网络技术有限公司 A fluff rendering method, device, equipment and medium
CN116115995A (en) * 2021-11-15 2023-05-16 完美世界(北京)软件科技发展有限公司 Image rendering processing method and device and electronic equipment
CN114119821A (en) * 2021-11-18 2022-03-01 洪恩完美(北京)教育科技发展有限公司 Hair rendering method, apparatus and device for virtual object
CN116402980A (en) * 2021-12-28 2023-07-07 北京字跳网络技术有限公司 A method, device, equipment, medium and product for generating virtual fluff
CN116416363A (en) * 2021-12-29 2023-07-11 北京字跳网络技术有限公司 A fluff rendering method, device, equipment and medium
CN114842119A (en) * 2022-05-06 2022-08-02 网易(杭州)网络有限公司 Hair model generation method, device, computer equipment and storage medium
CN114693856B (en) * 2022-05-30 2022-09-09 腾讯科技(深圳)有限公司 Object generation method and device, computer equipment and storage medium
CN116883567B (en) * 2023-07-07 2024-08-16 上海散爆信息技术有限公司 Fluff rendering method and device

Citations (5)

Publication number Priority date Publication date Assignee Title
CN106815883A (en) * 2016-12-07 2017-06-09 珠海金山网络游戏科技有限公司 Hair processing method and system for a game character
CN108961373A (en) * 2018-05-23 2018-12-07 福建天晴在线互动科技有限公司 Fur rendering method and terminal
CN109685876A (en) * 2018-12-21 2019-04-26 北京达佳互联信息技术有限公司 Fur rendering method and apparatus, electronic equipment and storage medium
CN109816762A (en) * 2019-01-30 2019-05-28 网易(杭州)网络有限公司 Image rendering method and device, electronic equipment and storage medium
CN110766799A (en) * 2018-07-27 2020-02-07 网易(杭州)网络有限公司 Method and device for processing hair of virtual object, electronic device and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8407619B2 (en) * 2008-07-30 2013-03-26 Autodesk, Inc. Method and apparatus for selecting and highlighting objects in a client browser


Also Published As

Publication number Publication date
CN111462313A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
CN111462313B (en) Method, device and terminal for realizing fluff effect
US12347016B2 (en) Image rendering method and apparatus, device, medium, and computer program product
CN112933597A (en) Image processing method, image processing device, computer equipment and storage medium
CN111476877B (en) Shadow rendering method and device, electronic equipment and storage medium
US9176662B2 (en) Systems and methods for simulating the effects of liquids on a camera lens
CN105528207A (en) Virtual reality system, and method and apparatus for displaying Android application images therein
CN109712226A (en) The see-through model rendering method and device of virtual reality
US12118663B2 (en) Modifying voxel resolutions within three-dimensional representations
CN106447756B (en) Method and system for generating user-customized computer-generated animations
WO2023098358A1 (en) Model rendering method and apparatus, computer device, and storage medium
US11625900B2 (en) Broker for instancing
US20210241540A1 (en) Applying Non-Destructive Edits To Nested Instances For Efficient Rendering
CN116193050B (en) Image processing method, device, equipment and storage medium
Valenza Blender Cycles: Materials and Textures Cookbook
CN114367105B (en) Model coloring method, device, apparatus, medium, and program product
CN116485981A (en) Three-dimensional model mapping method, device, equipment and storage medium
CN115131480A (en) Method and device for manufacturing special effect of horse race lamp and electronic equipment
Papaioannou et al. Enhancing Virtual Reality Walkthroughs of Archaeological Sites.
CN114764851A (en) Three-dimensional image editing method, system, storage medium and terminal equipment
CN112102450A (en) WebGL three-dimensional map-based general method for special effect of marquee
US12169899B2 (en) Cloth modeling using quad render meshes
CN114004920B (en) A method and device for adding flame special effects to a picture
CN113487708B (en) Flow animation implementation method based on graphics, storage medium and terminal equipment
CN119107399B (en) Shadow rendering method and device based on 2D image
Souza An analysis of real-time ray tracing techniques using the Vulkan® explicit API

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant