
CN117437362A - A three-dimensional animation model generation method and system - Google Patents


Info

Publication number
CN117437362A
CN117437362A (application CN202311538906.4A; granted as CN117437362B)
Authority
CN
China
Prior art keywords
vertex
target
dimensional
vector
obtaining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311538906.4A
Other languages
Chinese (zh)
Other versions
CN117437362B
Inventor
王冠军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Kunteng Animation Co ltd
Original Assignee
Shenzhen Kunteng Animation Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Kunteng Animation Co ltd filed Critical Shenzhen Kunteng Animation Co ltd
Priority claimed from CN202311538906.4A
Publication of CN117437362A
Application granted
Publication of CN117437362B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the technical field of three-dimensional model generation, and in particular to a three-dimensional animation model generation method and system. The method comprises the following steps: generating a three-dimensional animation model and obtaining the coordinates of each of its vertices; taking each vertex in turn as a target vertex and obtaining its first feature field and second feature field; obtaining a target differential vector for the target vertex from the differences between the target vertex and the vertices in its first feature field; obtaining a three-dimensional vertex density index for the target vertex from the distances between the target vertex and the vertices in its second feature field; obtaining the normal vector of each vertex, presetting a three-dimensional unit vector, obtaining the offset angle of each vertex from its normal vector and the three-dimensional unit vector, and obtaining the concave-convex parameter of the target vertex from these offset angles; obtaining the offset distance of the target vertex from its target differential vector, three-dimensional vertex density index and concave-convex parameter; and updating the vertex coordinates by the offset distances to generate the final three-dimensional animation model. The invention generates a three-dimensional animation model with higher precision.

Description

Three-dimensional animation model generation method and system
Technical Field
The invention relates to the technical field of three-dimensional model generation, in particular to a three-dimensional animation model generation method and system.
Background
With the development of artificial intelligence, AI techniques are increasingly used to empower traditional industries and improve their efficiency. Generative AI has likewise advanced greatly: earlier generative models mainly produced two-dimensional images, but mature techniques now exist for generating three-dimensional models with deep learning. A three-dimensional model is a mathematical representation of a physical object in space; in a computer it can be expressed as a polygon mesh, parametric surface, voxel grid, point cloud, and so on. The polygon mesh is the most commonly used representation: it is easy to edit and performs well in real-time rendering. Among polygon meshes, the triangular mesh (built from triangle primitives) is the most widely used. A triangular mesh consists of vertex coordinates and vertex indices: the coordinates give the positions of the three vertices of each triangle, and the indices give the connection order among the vertices. Finer three-dimensional models also store information such as texture coordinates, vertex colors, and normal vectors. In generated animation models, most surfaces are comparatively smooth and lack the fine texture details of the characters. Model detail is usually enriched with a displacement map or normal offsets; a displacement map must be prepared to specify the displacement of every vertex, and it changes vertex positions substantially.
Normal offsetting, by contrast, finely adjusts vertex positions so that the surface normals of the triangular mesh change, affecting lighting and shading and making the three-dimensional model richer in detail.
During normal offsetting, however, the triangular mesh of a three-dimensional animation model is unevenly distributed, so applying normal offsets directly to enhance detail can geometrically distort some vertices, i.e. unreasonable vertex positions appear in the detailed parts of the model.
Disclosure of Invention
In order to solve the technical problem of model detail distortion, the invention provides a three-dimensional animation model generation method and system, adopting the following technical scheme:
in a first aspect, the present invention provides a method for generating a three-dimensional animation model, the method comprising the steps of:
and generating a three-dimensional cartoon model, and obtaining the coordinates of each vertex of the three-dimensional cartoon model.
Marking any vertex of the three-dimensional cartoon model as a target vertex, and acquiring a first characteristic field and a second characteristic field of the target vertex; obtaining a target differential vector of the target vertex according to the difference vector of the target vertex and each vertex in the first characteristic field and the number of the vertexes in the first characteristic field;
acquiring a three-dimensional vertex density index of the target vertex according to the Euclidean distance between the vertex in the second characteristic field of the target vertex and the target vertex, the preset adjacent distance between the vertex in the second characteristic field and the number of the vertex in the second characteristic field;
obtaining normal vectors of all vertexes, presetting three-dimensional unit vectors, obtaining offset angles of all vertexes according to the normal vectors of all vertexes and the three-dimensional unit vectors, and obtaining concave-convex parameters of the target vertexes according to the offset angles of the vertexes in all first characteristic fields corresponding to the target vertexes; obtaining the offset distance of the target vertex according to the target differential vector of the target vertex, the three-dimensional vertex density index and the concave-convex parameter;
and adding the offset distance of the target vertex to its coordinates to obtain updated coordinates, obtaining updated coordinates for all vertices, and generating the three-dimensional animation model from the updated coordinates.
Preferably, the first feature field and the second feature field of the target vertex are obtained as follows:
all vertices directly connected to the target vertex form its first feature field, where "directly connected" means an edge joins the target vertex and the vertex with no other vertex on the connecting line;
all vertices indirectly connected to the target vertex form its second feature field, where "indirectly connected" means the target vertex and the vertex are not directly connected but are both directly connected to a common vertex.
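As a concrete illustration of the two feature fields, the following sketch builds the first feature field (the 1-ring of directly connected vertices) and the second feature field (vertices reachable through exactly one shared neighbor) for every vertex of a triangular mesh. The function name and data layout are illustrative assumptions, not part of the disclosure.

```python
from collections import defaultdict

def ring_neighborhoods(triangles):
    """Given a list of (i, j, k) vertex-index triples, return two dicts:
    ring1[v] = first feature field of v (directly connected vertices),
    ring2[v] = second feature field of v (indirectly connected vertices)."""
    ring1 = defaultdict(set)
    for a, b, c in triangles:
        # every edge of a triangle directly connects two vertices
        ring1[a].update((b, c))
        ring1[b].update((a, c))
        ring1[c].update((a, b))
    ring2 = {}
    for v, nbrs in ring1.items():
        second = set()
        for n in nbrs:
            second.update(ring1[n])
        # exclude the vertex itself and its direct neighbors
        ring2[v] = second - nbrs - {v}
    return dict(ring1), ring2
```

For two triangles sharing an edge, (0, 1, 2) and (1, 2, 3), vertex 0 has first feature field {1, 2} and second feature field {3}, since 0 and 3 are both directly connected to vertices 1 and 2 but not to each other.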
Preferably, the target differential vector of the target vertex is obtained from the difference vectors between the target vertex and each vertex in its first feature field and the number of vertices in that field, as follows:
marking each vertex in the first feature field of the target vertex as a first-field vertex, and marking the difference between the coordinates of the target vertex and a first-field vertex as the first vector between them;
accumulating the moduli of the first vectors between the target vertex and all of its first-field vertices as the first characteristic value of the target vertex;
obtaining the first differential vector of the target vertex from its first vectors and the number of its first-field vertices;
obtaining the second differential vector of the target vertex from its first vectors, its first characteristic value, and the modulus of each first vector;
carrying out a weighted summation of the first differential vector and the second differential vector to obtain the target differential vector of the target vertex, wherein the weight of the first differential vector is smaller than that of the second differential vector.
Preferably, the method for obtaining the first differential vector of the target vertex according to the first vector of the target vertex and the first domain vertex and the number of the first domain vertices corresponding to the target vertex includes:
δ1(V_i) = (1 / |N1(V_i)|) · Σ_{j ∈ N1(V_i)} (V_i − V1_{i,j})
wherein V_i denotes the i-th vertex; V1_{i,j} the j-th vertex in the first feature field of the i-th vertex; V_i − V1_{i,j} the first vector between them; N1(V_i) the set of vertices in the first feature field of the i-th vertex and |N1(V_i)| their number; and δ1(V_i) the first differential vector of the i-th vertex.
Preferably, the method for obtaining the second differential vector of the target vertex according to the first vector of the target vertex and the first domain vertex thereof, the first eigenvalue of the target vertex and the modulus of each first vector comprises the following steps:
δ2(V_i) = Σ_{j ∈ N1(V_i)} exp(−|V_i − V1_{i,j}| / M(V_i)) · (V_i − V1_{i,j})
wherein V_i denotes the i-th vertex; V1_{i,j} the j-th vertex in its first feature field; V_i − V1_{i,j} the first vector between them and |V_i − V1_{i,j}| its modulus; M(V_i) the first characteristic value of the i-th vertex; N1(V_i) the set of vertices in its first feature field; exp() the exponential function with the natural constant as base; and δ2(V_i) the second differential vector of the i-th vertex.
Preferably, the three-dimensional vertex density index of the target vertex is obtained from the Euclidean distances between the target vertex and the vertices in its second feature field, the preset neighborhood distances of those vertices, and their number, as follows:
taking the vertices in the second feature field of the target vertex as its second-field vertices; computing the Euclidean distances between each second-field vertex and all vertices; setting a preset value k, and taking the distance between a second-field vertex and its k-th closest vertex as the neighborhood distance of that second-field vertex; taking the larger of the Euclidean distance between the target vertex and a second-field vertex and the neighborhood distance of that second-field vertex as the target distance between them; and obtaining the three-dimensional vertex density index of the target vertex from the target distances between the target vertex and its second-field vertices and the number of vertices in the second feature field.
Preferably, the method for obtaining the three-dimensional vertex density index of the target vertex according to the target distance between the target vertex and the vertex in the second field and the number of the vertices in the second feature field includes:
U(V_i) = |N2(V_i)| / Σ_{j ∈ N2(V_i)} DIS(V2_{i,j}, V_i)
wherein |N2(V_i)| denotes the number of second-field vertices of the i-th vertex; V_i the i-th vertex; V2_{i,j} the j-th vertex in its second feature field; N2(V_i) the set of vertices in the second feature field of the i-th vertex; DIS(V2_{i,j}, V_i) the target distance between V_i and V2_{i,j}; and U(V_i) the three-dimensional vertex density index of the i-th vertex.
Preferably, the method for obtaining the offset angle of each vertex according to the normal vector and the three-dimensional unit vector of each vertex and obtaining the concave-convex parameters of the target vertex according to the offset angles of the vertices in all the first feature fields corresponding to each target vertex includes:
for each vertex, acquiring an included angle between a normal vector of the vertex and a three-dimensional unit vector according to a point multiplication formula of the vector, and taking the included angle as an offset angle of the vertex;
and for each target vertex, calculating standard deviation of the offset angles of all the vertices in the first feature field as concave-convex parameters of the target vertex.
Preferably, the method for obtaining the offset distance of the target vertex according to the target differential vector of the target vertex, the three-dimensional vertex density index and the concave-convex parameter comprises the following steps:
R(V_i) = δ(V_i) × exp(−β × U(V_i) × S(V_i))
wherein V_i denotes the i-th vertex; δ(V_i) the target differential vector of the i-th vertex; U(V_i) its three-dimensional vertex density index; S(V_i) its concave-convex parameter; exp() the exponential function with the natural constant as base; β an adjustment coefficient; and R(V_i) the offset distance of the i-th vertex.
In a second aspect, an embodiment of the present invention further provides a three-dimensional animation model generating system, including a memory, a processor, and a computer program stored in the memory and running on the processor, where the processor implements the steps of any one of the foregoing three-dimensional animation model generating methods when executing the computer program.
The invention has the following beneficial effects: in calculating the differential vector, different weight coefficients are set according to the distance between the vertex and its neighborhood vertices, and two ways of computing the differential vector are combined, reducing the fitting error of the formula and accurately describing the position of the vertex relative to the surrounding vertices. Considering the varying density of the triangular mesh in a three-dimensional animation model, the vertex density index U(V_i) and the model concave-convex parameter S(V_i) set different offset distances for vertices at different positions, preventing the geometric distortion and obvious shape changes caused by excessive offsets, so that the finally generated three-dimensional animation model is finer.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for generating a three-dimensional animation model according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of target distances between a target vertex and a second domain vertex;
FIG. 3 is a schematic view of the surface on which the vertices lie.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following is a detailed description of a specific implementation, structure, characteristics and effects of the three-dimensional animation model generation method according to the invention with reference to the accompanying drawings and the preferred embodiment. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
An embodiment of a three-dimensional animation model generation method comprises the following steps:
the following specifically describes a specific scheme of the three-dimensional animation model generation method provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a three-dimensional animation model generation method according to an embodiment of the invention is shown, and the method includes the following steps:
and S001, generating a three-dimensional cartoon model, and acquiring coordinates of each vertex of the three-dimensional cartoon model.
Randomly generate a three-dimensional animation model with a three-dimensional model generator and represent it in a three-dimensional Cartesian coordinate system. With the model fixed in an arbitrary orientation, obtain the coordinates of all its vertices through this coordinate system, giving the vertex coordinate set V = {V_1, V_2, …, V_n}. In this embodiment, the three-dimensional animation model is a three-dimensional model represented by a triangular mesh.
So far, the vertex set of the three-dimensional cartoon model is obtained.
Step S002, marking any vertex of the three-dimensional cartoon model as a target vertex, and acquiring a first characteristic field and a second characteristic field of the target vertex; and obtaining the target differential vector of each vertex according to the difference vector of the target vertex and each vertex in the first characteristic field and the number of the vertices in the first characteristic field.
Small offsets of vertices along their normal vectors can amplify fine details of the surface of a three-dimensional object, such as wrinkles, textures and concave-convex features; these small changes make the three-dimensional model look more real and natural. Compared with directly adding detail geometry to the three-dimensional animation model, amplifying surface detail by normal-vector displacement does not change the number of vertices, effectively reducing the rendering burden on the computer.
In order to enlarge the detailed part of the three-dimensional animation model, the position information of the vertex is firstly acquired. However, the conventional three-dimensional cartesian coordinate system only describes the positions of the vertices in the global, and cannot describe the positions of the vertices relative to surrounding vertices. Knowing only the vertex in a three-dimensional cartesian coordinate system, it is not possible to infer how the vertex should move.
To describe the position of one vertex relative to surrounding vertices, use of differential vectors is considered. The differential vector focuses more on the local information of the surface than on the cartesian coordinates describing the global spatial position.
Mark any vertex of the three-dimensional animation model as a target vertex. All vertices directly connected to the target vertex form its first feature field, and all vertices indirectly connected to it form its second feature field. Two vertices are directly connected when an edge joins them with no other vertex on the connecting line; two vertices are indirectly connected when they are not directly connected but are both directly connected to a common vertex. In this way the first feature field and second feature field of every vertex are obtained.
For the target vertex, take the difference between its coordinates and those of each vertex in its first feature field, and mark each resulting vector as a first vector. The first differential vector of the target vertex is the mean of its first vectors; the first characteristic value is the sum of their moduli; and the second differential vector weights each first vector by an exponential function of the ratio of its modulus to the first characteristic value. The formulas are as follows:
δ1(V_i) = (1 / |N1(V_i)|) · Σ_{j ∈ N1(V_i)} (V_i − V1_{i,j})
M(V_i) = Σ_{j ∈ N1(V_i)} |V_i − V1_{i,j}|
δ2(V_i) = Σ_{j ∈ N1(V_i)} exp(−|V_i − V1_{i,j}| / M(V_i)) · (V_i − V1_{i,j})
wherein V_i denotes the i-th vertex; V1_{i,j} the j-th vertex in the first feature field of the i-th vertex; V_i − V1_{i,j} the first vector between them and |V_i − V1_{i,j}| its modulus; N1(V_i) the set of vertices in the first feature field of the i-th vertex and |N1(V_i)| their number; exp() the exponential function with the natural constant as base; M(V_i) the first characteristic value of the i-th vertex; δ1(V_i) its first differential vector; and δ2(V_i) its second differential vector.
In the three-dimensional animation model, neighborhood vertices close to the target vertex V_i have a larger influence on its differential vector and are given larger weights. If a neighborhood vertex V1_{i,j} is far from the target vertex V_i in a straight line, the differential weight of V_i in the direction of V1_{i,j} is smaller: the differential vector reflects local position information, and the farther a neighborhood vertex V1_{i,j} is, the less representative it is of the position information of the vertex's neighborhood, so it receives a smaller weight.
The target differential vector of the target vertex is obtained from its first and second differential vectors as follows:
δ(V_i) = (1 − γ) · δ1(V_i) + γ · δ2(V_i)
wherein δ1(V_i) denotes the first differential vector of the i-th vertex; δ2(V_i) its second differential vector; γ the weight coefficient, set to 0.8 in this embodiment; and δ(V_i) the target differential vector of the i-th vertex.
Since the differential vector only approximately expresses the local three-dimensional geometry, each computation has some error; combining the two ways of computing the differential vector reduces the potential error. δ2(V_i) is the more principled of the two and therefore receives the larger weight.
Thus, the target differential vector for each vertex is obtained.
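The computation above can be sketched as follows. The exact exponential weighting and normalization of the second differential vector are assumptions read from the description (nearer neighbors get larger weights; moduli are normalized by the first characteristic value M(V_i)); γ = 0.8 follows the embodiment.

```python
import numpy as np

def target_differential_vector(v, ring1_coords, gamma=0.8):
    """v: (3,) coordinates of the target vertex; ring1_coords: (m, 3)
    coordinates of its first-feature-field vertices. Returns the target
    differential vector as a weighted blend of two differential vectors."""
    diffs = v - ring1_coords                 # first vectors V_i - V1_{i,j}
    d1 = diffs.mean(axis=0)                  # first differential vector
    mods = np.linalg.norm(diffs, axis=1)     # moduli of the first vectors
    M = mods.sum()                           # first characteristic value
    w = np.exp(-mods / M)                    # assumed distance weighting
    w = w / w.sum()                          # assumed normalization
    d2 = (w[:, None] * diffs).sum(axis=0)    # second differential vector
    return (1 - gamma) * d1 + gamma * d2     # gamma weights d2 more heavily
```

With a symmetric neighborhood (neighbors placed evenly around the vertex) both differential vectors cancel to zero, which matches the intuition that a flat, regular region needs no offset.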
Step S003, obtaining a three-dimensional vertex density index of the target vertex according to the Euclidean distance between the vertex in the second characteristic field of the target vertex and the target vertex, the preset adjacent distance between the vertex in the second characteristic field and the number of the vertex in the second characteristic field.
In a three-dimensional animation model, the triangular mesh can generally be divided into parts representing model detail and parts representing the rough pose. There are more triangles in the parts carrying model detail (e.g. the eyes and ears of the head); for ease of description these are called dense regions. Parts such as the back or abdomen of a three-dimensional animated character carry less detail than the head and require fewer triangles; these are called non-dense regions.
In the triangle mesh dense region, the three-dimensional animation model has more details in the part, and the displacement weight of the vertex needs to be reduced, because the excessive displacement can cause geometric distortion of the model, while in the non-dense region, the three-dimensional animation model has less details, and the displacement of the vertex can be properly increased.
For each second-field vertex of the target vertex, obtain its k-neighborhood distance, i.e. the Euclidean distance between the second-field vertex and its k-th closest vertex; in this embodiment k is set to 6. Record this distance as the neighborhood distance of the second-field vertex. For each second-field vertex of the target vertex, compare the Euclidean distance between the second-field vertex and the target vertex with the neighborhood distance of the second-field vertex, and select the larger value as the target distance between them. The three-dimensional vertex density index of the target vertex is then obtained from the target distances to all second-field vertices and the number of second-field vertices, as follows:
U(V_i) = |N2(V_i)| / Σ_{j ∈ N2(V_i)} DIS(V2_{i,j}, V_i)
wherein |N2(V_i)| denotes the number of second-field vertices of the i-th vertex; V_i the i-th vertex; V2_{i,j} the j-th vertex in its second feature field; N2(V_i) the set of vertices in the second feature field of the i-th vertex; DIS(V2_{i,j}, V_i) the target distance between V_i and V2_{i,j}; and U(V_i) the three-dimensional vertex density index of the i-th vertex.
The target distance DIS(V2_{i,j}, V_i) is illustrated in FIG. 2, where V2_{i,j} denotes the j-th vertex in the second feature field of the i-th vertex and dis_k(V2_{i,j}) its neighborhood distance. If the Euclidean distance from V2_{i,j} to V_i is smaller than dis_k(V2_{i,j}), then dis_k(V2_{i,j}) is taken as the target distance between V_i and V2_{i,j}; if it is larger than dis_k(V2_{i,j}), the Euclidean distance itself is taken as the target distance.
In a region of dense vertices the distribution of vertices is concentrated and the distances between V_i and its second-field vertices are relatively small; the model detail there is described more finely and represented with more triangles. For a vertex V_i, the more second-field vertices it has, the larger the numerator |N2(V_i)|; the smaller their distances to V_i, the smaller the denominator Σ DIS(V2_{i,j}, V_i); so the larger the vertex density index U(V_i), the denser the vertices.
Thus, the three-dimensional vertex density index of each vertex is obtained.
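A minimal sketch of the density index, assuming the target distance is the larger of the Euclidean distance to the target vertex and the second-field vertex's k-th-nearest-neighbor distance, with k = 6 as the default per the embodiment:

```python
import numpy as np

def vertex_density_index(v, ring2_coords, all_coords, k=6):
    """Three-dimensional vertex density index U(V_i): number of
    second-field vertices over the sum of their target distances.
    v: (3,) target vertex; ring2_coords: iterable of (3,) second-field
    vertex coordinates; all_coords: (n, 3) all vertex coordinates."""
    total = 0.0
    for u in ring2_coords:
        d_euclid = float(np.linalg.norm(v - u))
        # k-th closest vertex distance of u among all vertices
        dists = np.sort(np.linalg.norm(all_coords - u, axis=1))
        d_k = float(dists[min(k, len(dists) - 1)])  # dists[0] is u itself
        total += max(d_euclid, d_k)
    return len(ring2_coords) / total
```

Larger U means the second-field vertices sit closer to the target vertex, i.e. a denser patch of the mesh whose offsets should be attenuated.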
Step S004, obtaining normal vectors of all vertexes, presetting three-dimensional unit vectors, obtaining offset angles of all vertexes according to the normal vectors and the three-dimensional unit vectors of all vertexes, and obtaining concave-convex parameters of the target vertexes according to the offset angles of all vertexes in the first characteristic field corresponding to the target vertexes; and obtaining the offset distance of the target vertex according to the target differential vector of the target vertex, the three-dimensional vertex density index and the concave-convex parameter.
For each vertex, the larger the three-dimensional vertex density index, the more vertices surround it and the denser the triangular mesh. Because the three-dimensional animation model is first generated randomly, a non-dense region may be represented with many vertices while a dense region may carry too little vertex-described detail; this situation needs to be corrected.
First, for each vertex, obtain its normal vector from its coordinates: the normal vector of a vertex is the sum of the normal vectors of the faces on which the vertex lies. In FIG. 3, face 1 is the top face, face 2 the right-rear face, face 3 the right-front face, face 4 the bottom face, face 5 the left-front face and face 6 the left-rear face; vertex V_j lies on faces 1, 3 and 5. A three-dimensional unit vector e is preset for the three-dimensional animation model. The offset angle of each vertex is obtained from its normal vector and the three-dimensional unit vector as follows:
in the method, in the process of the invention,normal vector representing the ith vertex, +.>Represents a three-dimensional unit vector in which the molecules are the dot product of two vectors, arccos represents a trigonometric function, θ i Representing the offset angle of the ith vertex.
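The offset-angle computation can be illustrated with a short sketch (names are ours; taking the z-axis as the preset three-dimensional unit vector is our assumption):

```python
import numpy as np

def offset_angle(normal, unit_vec=np.array([0.0, 0.0, 1.0])):
    """theta = arccos((n . u) / (|n| * |u|)): the angle between a vertex
    normal and the preset unit vector. clip() guards against rounding
    pushing the cosine slightly outside [-1, 1]."""
    cos_t = np.dot(normal, unit_vec) / (
        np.linalg.norm(normal) * np.linalg.norm(unit_vec))
    return np.arccos(np.clip(cos_t, -1.0, 1.0))

assert np.isclose(offset_angle(np.array([0.0, 0.0, 2.0])), 0.0)        # parallel
assert np.isclose(offset_angle(np.array([1.0, 0.0, 0.0])), np.pi / 2)  # orthogonal
```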
Then, for each target vertex, the standard deviation of the offset angles of all vertices in its first feature neighborhood is calculated as the concave-convex parameter of the target vertex, with the formula:

S(V_i) = σ(θ_{i,j})

where θ_{i,j} represents the offset angle of the jth vertex in the first feature neighborhood of the ith vertex, V_i represents the ith vertex, σ() represents the standard deviation function, and S(V_i) represents the concave-convex parameter of the ith vertex.
When the concavity and convexity of the curved surface near vertex V_i are inconsistent, the normal vector directions of the surrounding faces are inconsistent, so the normal vectors of the vertices in the first feature neighborhood of V_i are more disordered and the standard deviation of their offset angles is larger. The larger the concave-convex parameter S(V_i), the more varied the normal directions in the area around V_i, the richer the surface, and the more likely the area represents detail. Conversely, the smaller the concave-convex parameter S(V_i), the more uniformly the faces (curved surfaces) of the nearby triangular meshes are oriented, and the more likely the area around V_i is a smooth area with little detail. The larger the vertex density index U(V_i) and the concave-convex parameter S(V_i), the more likely the area around V_i is a detail part of the model; the offset applied during the normal offset process is therefore reduced, preventing geometric distortion of the model caused by an excessive offset in the detail areas of the three-dimensional model.
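The concave-convex parameter amounts to a one-line computation; the sketch below (our naming and sample angles) shows how a scattered set of neighborhood offset angles yields a larger value than a uniform one:

```python
import numpy as np

def bump_parameter(neighbor_offset_angles):
    """S(V_i) = sigma(theta_{i,j}): standard deviation of the offset angles
    of the vertices in the first feature neighborhood of V_i."""
    return float(np.std(neighbor_offset_angles))

smooth_region = [0.5, 0.5, 0.5]        # uniformly oriented faces
detail_region = [0.1, 1.2, 0.4, 2.0]   # scattered normal directions
assert bump_parameter(smooth_region) == 0.0
assert bump_parameter(detail_region) > bump_parameter(smooth_region)
```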
Obtaining the offset distance of each vertex according to the vertex density index, the concave-convex parameters and the target differential vector of each vertex, wherein the formula is as follows:
R(V i )=δ(V i )×exp(-β×U(V i )×S(V i ))
where V_i represents the ith vertex, δ(V_i) represents the target differential vector of the ith vertex, U(V_i) represents the vertex density index of the ith vertex, S(V_i) represents the concave-convex parameter of the ith vertex, exp() represents the exponential function with the natural constant as its base, β represents an adjustment coefficient, set to 0.1 in this embodiment, and R(V_i) represents the offset distance of the ith vertex.
In the three-dimensional animation model, where the vertex density index U(V_i) and the concave-convex parameter S(V_i) are larger, the offset carried by the differential vector is reduced. In areas with fewer details, such as the abdomen and back of a cartoon character, the offset value can be appropriately increased to better enrich detail.
Thus, the offset distance of each vertex is obtained.
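Putting the three quantities together, the offset distance R(V_i) = δ(V_i) × exp(−β × U(V_i) × S(V_i)) can be sketched as follows (variable names and sample values are ours; β = 0.1 as in the embodiment):

```python
import numpy as np

def offset_distance(delta, U, S, beta=0.1):
    """R(V_i) = delta(V_i) * exp(-beta * U(V_i) * S(V_i)).
    delta: target differential vector (a 3-vector); U: density index;
    S: concave-convex parameter. The exponential damps the offset in
    dense, high-detail regions and leaves smooth regions nearly unscaled."""
    return delta * np.exp(-beta * U * S)

delta = np.array([0.0, 0.0, 1.0])
smooth = offset_distance(delta, U=0.5, S=0.1)  # sparse, smooth area
detail = offset_distance(delta, U=5.0, S=2.0)  # dense, detailed area
assert np.linalg.norm(detail) < np.linalg.norm(smooth)
```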
And S005, adding the offset distance of the target vertex and the coordinates of the target vertex to obtain updated coordinates, and obtaining the updated coordinates for all the vertices, wherein the updated coordinates generate the three-dimensional cartoon model.
After the offset distance of each vertex is obtained, note that the offset distance is itself a vector; it is added to the coordinates of the vertex to obtain new vertex coordinates, and all vertices are updated by their offset distances to obtain a new three-dimensional cartoon model. This operation amplifies the small variations at the model vertices, enlarging small irregularities so that smooth portions carry more detail; the shadows under illumination become richer, increasing the sense of realism of the three-dimensional cartoon model.
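Since each offset distance is a vector, step S005 reduces to a per-vertex addition; a minimal sketch (the toy vertex and offset data are ours):

```python
import numpy as np

# Coordinates of three vertices of a (toy) three-dimensional model.
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
# Offset distances R(V_i) computed in step S004, one vector per vertex.
offsets = np.array([[0.00, 0.00, 0.02],
                    [0.01, 0.00, 0.00],
                    [0.00, -0.01, 0.00]])
# Updated coordinates: small irregularities are amplified, enriching detail.
updated = vertices + offsets
assert updated.shape == vertices.shape
assert np.allclose(updated[0], [0.0, 0.0, 0.02])
```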
The embodiment provides a three-dimensional animation model generation system, comprising a memory, a processor and a computer program stored in the memory and runnable on the processor, wherein the processor implements the method of steps S001 to S005 when executing the computer program.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.

Claims (10)

1. The three-dimensional animation model generation method is characterized by comprising the following steps of:
generating a three-dimensional cartoon model, and acquiring coordinates of each vertex of the three-dimensional cartoon model;
marking any vertex of the three-dimensional cartoon model as a target vertex, and acquiring a first characteristic field and a second characteristic field of the target vertex; obtaining a target differential vector of the target vertex according to the difference vector of the target vertex and each vertex in the first characteristic field and the number of the vertexes in the first characteristic field;
acquiring a three-dimensional vertex density index of the target vertex according to the Euclidean distance between the vertex in the second characteristic field of the target vertex and the target vertex, the preset adjacent distance between the vertex in the second characteristic field and the number of the vertex in the second characteristic field;
obtaining normal vectors of all vertices, presetting a three-dimensional unit vector, obtaining offset angles of all vertices according to the normal vector of each vertex and the three-dimensional unit vector, and obtaining concave-convex parameters of the target vertex according to the offset angles of the vertices in the first characteristic field corresponding to the target vertex; obtaining the offset distance of the target vertex according to the target differential vector of the target vertex, the three-dimensional vertex density index and the concave-convex parameter;
and adding the offset distance of the target vertex with the coordinates of the target vertex to obtain updated coordinates, and obtaining the updated coordinates for all the vertices, wherein the updated coordinates generate the three-dimensional cartoon model.
2. The method for generating a three-dimensional animation model as claimed in claim 1, wherein the method for obtaining the first feature field and the second feature field of the target vertex comprises:
all vertices directly connected with the target vertex form the first characteristic field of the target vertex, wherein a direct connection means that the target vertex is connected with the vertex and no other vertex lies on the connecting line;

all vertices indirectly connected with the target vertex form the second characteristic field of the target vertex, wherein an indirect connection means that the target vertex and the vertex are both directly connected to a same vertex.
3. The method for generating a three-dimensional animation model as claimed in claim 1, wherein the method for obtaining the target differential vector of the target vertex according to the differential vector of the target vertex and each vertex in the first feature field and the number of vertices in the first feature field comprises the following steps:
marking each vertex in the first characteristic neighborhood of the target vertex as a first field vertex, and taking the difference between the coordinates of the target vertex and those of the first field vertex as the vector between the target vertex and the first field vertex, recorded as a first vector;

taking the first vectors between the target vertex and all of its first field vertices, and accumulating these first vectors to obtain the first characteristic value of the target vertex;
obtaining a first differential vector of the target vertex according to the first vector of the target vertex and the first domain vertex thereof and the number of the first domain vertices corresponding to the target vertex;
obtaining a second differential vector of the target vertex according to the first vector of the target vertex and the vertex of the first field, the first characteristic value of the target vertex and the modulus of each first vector;
and carrying out weighted summation on the first differential vector and the second differential vector of the target vertex to obtain the target differential vector of the target vertex, wherein the weight of the first differential vector is smaller than that of the second differential vector.
4. The method for generating a three-dimensional animation model as claimed in claim 3, wherein the method for obtaining the first differential vector of the target vertex according to the first vectors of the target vertex and its first domain vertices and the number of first domain vertices corresponding to the target vertex comprises:

δ1(V_i) = ( Σ_{V1_{i,j} ∈ N1(V_i)} (V_i − V1_{i,j}) ) / |N1(V_i)|

wherein V_i represents the ith vertex, V1_{i,j} represents the jth vertex in the first feature field of the ith vertex, V_i − V1_{i,j} represents the first vector of vertex V_i and vertex V1_{i,j}, N1(V_i) represents the set of first feature field vertices of the ith vertex, |N1(V_i)| represents the number of first domain vertices corresponding to the ith vertex, and δ1(V_i) represents the first differential vector of the ith vertex.
5. The method for generating a three-dimensional animation model as claimed in claim 3, wherein the method for obtaining the second differential vector of the target vertex according to the first vectors of the target vertex and its first domain vertices, the first characteristic value of the target vertex and the modulus of each first vector comprises:

δ2(V_i) = Σ_{V1_{i,j} ∈ N1(V_i)} exp( −|V_i − V1_{i,j}| / M(V_i) ) × (V_i − V1_{i,j})

wherein V_i represents the ith vertex, V1_{i,j} represents the jth vertex in the first feature field of the ith vertex, V_i − V1_{i,j} represents the first vector of vertex V_i and vertex V1_{i,j}, |V_i − V1_{i,j}| represents the modulus of that first vector, M(V_i) represents the first characteristic value of the ith vertex, N1(V_i) represents the set of first feature field vertices of the ith vertex, exp() represents the exponential function with the natural constant as its base, and δ2(V_i) represents the second differential vector of the ith vertex.
6. The method for generating a three-dimensional animation model according to claim 1, wherein the method for obtaining the three-dimensional vertex density index of the target vertex according to the Euclidean distance between each vertex in the second feature field of the target vertex and the target vertex, the preset neighborhood distance of each vertex in the second feature field, and the number of vertices in the second feature field comprises:

taking the vertices in the second characteristic field of the target vertex as the second field vertices corresponding to the target vertex, calculating the Euclidean distances between each second field vertex and all vertices, setting a preset value k, and taking the distance between the second field vertex and its kth-closest vertex as the neighborhood distance of that second field vertex; obtaining the maximum of the Euclidean distance between the target vertex and the second field vertex and the neighborhood distance of the second field vertex, taking that maximum as the target distance between the target vertex and the second field vertex, and obtaining the three-dimensional vertex density index of the target vertex according to the target distances between the target vertex and the second field vertices and the number of vertices in the second characteristic field.
7. The method for generating a three-dimensional animation model as claimed in claim 6, wherein the method for obtaining the three-dimensional vertex density index of the target vertex according to the target distance between the target vertex and each second field vertex and the number of vertices in the second feature field comprises:

U(V_i) = |N2(V_i)| / Σ_{V2_{i,j} ∈ N2(V_i)} DIS(V2_{i,j}, V_i)

wherein |N2(V_i)| represents the number of second field vertices corresponding to the ith vertex, V_i represents the ith vertex, V2_{i,j} represents the jth vertex in the second feature field of the ith vertex, N2(V_i) represents the set of second feature field vertices of the ith vertex, DIS(V2_{i,j}, V_i) represents the target distance between vertex V_i and vertex V2_{i,j}, and U(V_i) represents the three-dimensional vertex density index of the ith vertex.
8. The method for generating a three-dimensional animation model according to claim 1, wherein the method for obtaining the offset angle of each vertex according to the normal vector and the three-dimensional unit vector of each vertex and obtaining the concave-convex parameters of the target vertex according to the offset angles of the vertices in all the first feature fields corresponding to each target vertex comprises the following steps:
for each vertex, acquiring an included angle between a normal vector of the vertex and a three-dimensional unit vector according to a point multiplication formula of the vector, and taking the included angle as an offset angle of the vertex;
and for each target vertex, calculating standard deviation of the offset angles of all the vertices in the first feature field as concave-convex parameters of the target vertex.
9. The method for generating a three-dimensional animation model according to claim 1, wherein the method for obtaining the offset distance of the target vertex according to the target differential vector of the target vertex, the three-dimensional vertex density index and the concave-convex parameter comprises:
R(V i )=δ(V i )×exp(-β×U(V i )×S(V i ))
wherein V is i Represents the ith vertex, delta (V i ) A target differential vector representing the ith vertex, U (V i ) Vertex density index representing the ith vertex, S (V i ) Representing the concave-convex parameters of the ith vertex, exp () represents an exponential function based on a natural constant, β represents an adjustment coefficient, R (V i ) Representing the offset distance of the ith vertex.
10. A three-dimensional animation model generation system comprising a memory, a processor and a computer program stored in the memory and running on the processor, wherein the processor, when executing the computer program, implements the steps of a three-dimensional animation model generation method according to any of claims 1-9.
CN202311538906.4A 2023-11-17 2023-11-17 Three-dimensional animation model generation method and system Active CN117437362B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311538906.4A CN117437362B (en) 2023-11-17 2023-11-17 Three-dimensional animation model generation method and system


Publications (2)

Publication Number Publication Date
CN117437362A true CN117437362A (en) 2024-01-23
CN117437362B CN117437362B (en) 2024-06-21


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103632394A (en) * 2013-12-02 2014-03-12 江苏科技大学 Model simplification method with feature keeping function
US20170323443A1 (en) * 2015-01-20 2017-11-09 Indian Institute Of Technology, Bombay Systems and methods for obtaining 3-d images from x-ray information
WO2020034785A1 (en) * 2018-08-16 2020-02-20 Oppo广东移动通信有限公司 Method and device for processing three-dimensional model
WO2021042277A1 (en) * 2019-09-03 2021-03-11 浙江大学 Method for acquiring normal vector, geometry and material of three-dimensional object employing neural network
CN113450443A (en) * 2021-07-08 2021-09-28 网易(杭州)网络有限公司 Rendering method and device of sea surface model
CN113570634A (en) * 2020-04-28 2021-10-29 北京达佳互联信息技术有限公司 Object three-dimensional reconstruction method and device, electronic equipment and storage medium
CN114373056A (en) * 2021-12-17 2022-04-19 云南联合视觉科技有限公司 Three-dimensional reconstruction method and device, terminal equipment and storage medium
CN114419152A (en) * 2022-01-14 2022-04-29 中国农业大学 A method and system for target detection and tracking based on multi-dimensional point cloud features
CN116152454A (en) * 2023-02-15 2023-05-23 中铁水利信息科技有限公司 Water conservancy real-time monitoring management system based on GIS and three-dimensional modeling


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JEONG JOON PARK ET AL.: "DeepSDF : Learning Continuous Signed Distance Functions for Shape Representation", 《2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR)》, 9 January 2020 (2020-01-09), pages 165 - 174 *
L\'UBOR LADICKÝ ET AL.: "From Point Clouds to Mesh Using Regression", 《2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV)》, 25 December 2017 (2017-12-25), pages 3913 - 3922 *
任静丽等: "动漫设计"无纸化"的一种三维解决思路", 《计算机与数字工程》, no. 03, 20 March 2008 (2008-03-20), pages 98 - 99 *
刘建高等: "基于拓扑分析的三维动漫人物造型重构", 《计算机与现代化》, no. 10, 15 October 2020 (2020-10-15), pages 76 - 80 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant