CN112232310B - Face recognition system and method for expression capture - Google Patents
- Publication number
- CN112232310B CN112232310B CN202011425666.3A CN202011425666A CN112232310B CN 112232310 B CN112232310 B CN 112232310B CN 202011425666 A CN202011425666 A CN 202011425666A CN 112232310 B CN112232310 B CN 112232310B
- Authority
- CN
- China
- Prior art keywords
- feature
- dimensional coordinate
- dimensional
- active
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/176—Dynamic expression
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Geometry (AREA)
- Computer Hardware Design (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a face recognition system and method for expression capture. The face recognition system comprises a face image acquisition system and a facial feature recognition module; a real three-dimensional face reconstruction system for establishing a three-dimensional face model according to the structured-light imaging principle of the face image acquisition system and, at the same time, establishing a three-dimensional coordinate system for that model; a virtual character three-dimensional model system for establishing a three-dimensional virtual contour in the same three-dimensional coordinate system and filling it with virtual feature parts; a posture matching module for selecting several actively selected feature points on each facial feature part and determining a proportional relation; a feature point association module for determining passively selected feature points according to that proportional relation; and an expression dynamic tracking module that determines the three-dimensional coordinate changes of the passively selected feature points of the virtual feature parts in the three-dimensional coordinate system according to the proportional relation of the actively selected feature points. The method uses face recognition technology to ensure that the facial features of the virtual character resemble those of the real face.
Description
Technical Field
The invention relates to the technical field of face recognition, in particular to a face recognition system and method for expression capture.
Background
Face recognition is a biometric technology that identifies a person from facial feature information. A camera or video camera collects images or video streams containing a face, the face is automatically detected and tracked in the images, and a series of related recognition techniques are then applied to the detected face. A face recognition system mainly comprises four components: face image acquisition and detection, face image preprocessing, face image feature extraction, and matching and identification.
Face recognition is no longer used only for identity verification; it is now widely used in everyday entertainment, such as camera apps and picture comparison. For example, face comparison technology is used to test how closely a user's face resembles those of a film's leading actors. If face recognition technology were combined with expression capture technology, one could create cartoon characters that resemble the user's face and carry the same facial expressions.
However, the conventional face recognition system has the following defect: most face recognition systems acquire a face image through acquisition and detection, apply preprocessing such as light compensation, gray-level conversion, histogram equalization, normalization, geometric correction, filtering and sharpening, and then extract static features of the image through a face image feature extraction module. Traditional face recognition technology therefore recognizes facial contours and facial features but cannot capture expressions; it can only compare the similarity of two static images, and thus cannot create an animated character that resembles a given user's face and carries the same facial expression.
Disclosure of Invention
The invention aims to provide a face recognition system and method for expression capture, to solve the technical problem that traditional face recognition technology in the prior art recognizes facial contours and facial features but cannot capture expressions, and therefore cannot create cartoon characters that resemble a given user's face and carry the same facial expression.
In order to solve the technical problems, the invention specifically provides the following technical scheme:
a face recognition system for expression capture, comprising:
the human face image acquisition system comprises a camera shooting unit and a structured light acquisition unit, wherein the camera shooting unit is used for shooting a dynamic image of a human face, and the structured light acquisition unit is used for acquiring the numerical value and the change of an optical signal of a human face in the dynamic image through a structured light imaging principle to capture facial expressions;
the facial feature recognition module is used for capturing still frames from the dynamic image shot by the camera shooting unit and determining facial contour features and facial features through the face image feature extraction unit;
the real three-dimensional face reconstruction system is used for establishing a three-dimensional face model according to the structured light imaging principle of the structured light acquisition unit, simultaneously re-establishing a three-dimensional coordinate system of the three-dimensional face model, determining the distribution positions of a plurality of facial feature parts according to the numerical values of optical signals of the human face acquired by the structured light acquisition unit, and refining the facial contour features and the facial features of the facial feature parts according to the facial contour features and the facial features acquired by the facial feature recognition module;
the virtual character three-dimensional model system is used for establishing a three-dimensional virtual contour on the basis of a three-dimensional coordinate system of the real three-dimensional face reconstruction system, filling virtual feature parts in the three-dimensional virtual contour and determining the one-to-one correspondence between the facial feature parts and the virtual feature parts;
the posture matching module is used for selecting a plurality of active selection feature points of each facial feature part, establishing an active-passive relation that the facial feature parts actively pull the virtual feature parts to move, and determining a proportional relation between the three-dimensional coordinate value of each facial feature part and the three-dimensional coordinate value of the corresponding virtual feature part;
the feature point association module is used for determining a passive selection feature point corresponding to each active selection feature point in the virtual feature part according to the proportional relation between the three-dimensional coordinate value of each facial feature part and the three-dimensional coordinate value of the corresponding virtual feature part;
and the expression dynamic tracking module is used for determining the three-dimensional coordinate change value of the passively selected characteristic point of the virtual characteristic part in the three-dimensional coordinate system according to the three-dimensional coordinate change value of the actively selected characteristic point of the facial characteristic part in the three-dimensional coordinate system and the proportional relation of the actively selected characteristic point so as to realize animation simulation.
As a preferred scheme of the present invention, the present invention further includes a feature point selecting radiation module, configured to select an active radiation feature point according to an influence range of each active selection feature point, and determine a functional relationship between the active selection feature point and the active radiation feature points in different influence ranges according to a driving capability of the active radiation feature point with a different distance from the active selection feature point;
the feature point association module determines passive radiation feature points corresponding to the active radiation feature points according to the proportional relationship between the three-dimensional coordinate value of each facial feature part and the three-dimensional coordinate value of the corresponding virtual feature part, and the functional relationship between the passive selection feature points and the passive radiation feature points is the same as the functional relationship between the active selection feature points and the active radiation feature points;
and the expression dynamic tracking module determines the three-dimensional coordinate change value of the active radiation characteristic point according to a functional relation, and determines the three-dimensional coordinate change value of the passive radiation characteristic point of the virtual characteristic part in a three-dimensional coordinate system according to the active radiation characteristic point according to a proportional relation so as to accurately realize animation simulation.
As a preferable aspect of the present invention, the face image acquisition system divides the optical signals into static signals, corresponding to a motionless face, and dynamic signals, corresponding to facial movement. The real three-dimensional face reconstruction system establishes the three-dimensional face model from the static signals and determines the distribution positions of the facial feature parts from the differences in the optical signal parameters of all the static signals, wherein,
the real three-dimensional face reconstruction system reestablishes the three-dimensional coordinate system of the three-dimensional face model as follows:
establishing a two-dimensional coordinate system by taking the perpendicular bisector of the line connecting the eye feature parts of the three-dimensional face model as the Y axis, and the transverse line between the ear feature parts that perpendicularly intersects this bisector as the X axis;
taking the origin of the two-dimensional coordinate system as a starting point and drawing a perpendicular line into the three-dimensional face model as the Z axis, finally establishing the three-dimensional coordinate system;
and the three-dimensional coordinate values (x, y, z) of the actively selected feature points and active radiation feature points of a facial feature part are, respectively, the point's distance from the transverse line intersecting the perpendicular bisector, its distance from the perpendicular bisector, and the concave-convex position of the face corresponding to the depth value of the optical signal.
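The coordinate construction just described can be sketched in code. This is an illustrative sketch only: it assumes 2D image positions for the two eyes plus a structured-light depth value, and it uses the eye-line midpoint as a stand-in for the origin; the function name and inputs are not from the patent.

```python
import numpy as np

def face_coordinates(left_eye, right_eye, point, depth):
    """Express a facial landmark in the head-centred frame described above.
    X axis: along the eye line (stand-in for the ear-to-ear transverse line);
    Y axis: its perpendicular bisector; Z: the structured-light depth value."""
    left_eye, right_eye, point = map(np.asarray, (left_eye, right_eye, point))
    origin = (left_eye + right_eye) / 2.0        # midpoint of the eye line
    x_axis = right_eye - left_eye
    x_axis = x_axis / np.linalg.norm(x_axis)     # unit vector along the eye line
    y_axis = np.array([-x_axis[1], x_axis[0]])   # unit vector along the bisector
    d = point - origin
    # Per the claim: x is the offset from the transverse line (measured along
    # the bisector), y is the offset from the perpendicular bisector.
    return (float(d @ y_axis), float(d @ x_axis), float(depth))
```

For an upright face with eyes at (0, 0) and (2, 0), a point at (1, 1) sits on the bisector, one unit above the eye line.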
As a preferred embodiment of the present invention, the passively selected feature points and the passive radiation feature points of the virtual feature part determine their corresponding three-dimensional coordinate values (x', y', z') according to the proportional relation and the functional relation.
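The patent does not give the proportional relation in closed form; a minimal sketch, assuming it reduces to a per-axis scale factor matched during initialization (function and parameter names are illustrative):

```python
def map_to_virtual(real_xyz, scale_xyz):
    """Derive a virtual point's coordinates (x', y', z') from the real
    point's (x, y, z) via the per-part proportional relation.
    scale_xyz holds the matched (sx, sy, sz) ratios."""
    return tuple(c * s for c, s in zip(real_xyz, scale_xyz))
```

A real point at (2.0, 1.0, 0.5) with ratios (1.5, 1.5, 2.0) maps to (3.0, 1.5, 1.0) on the virtual contour.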
As a preferable aspect of the present invention, the real three-dimensional face reconstruction system and the virtual character three-dimensional model system are further nested as a whole in a second three-dimensional coordinate system, and the second three-dimensional coordinate system is used to realize the head movements of the real three-dimensional face reconstruction system and the virtual character three-dimensional model system.
As a preferred embodiment of the present invention, the system further includes a data pool tracking module. When the optical signal is a static signal and the posture matching module and the feature point association module have completed initialization of the proportional relation and the functional relation, the data pool tracking module stores the original three-dimensional coordinate values of the actively selected feature points and active radiation feature points, and the original three-dimensional coordinate values of the passively selected feature points and passive radiation feature points. When the optical signal is a dynamic signal representing facial movement, the data pool tracking module changes the original three-dimensional coordinate values of the actively selected feature points and active radiation feature points in real time, and changes the three-dimensional coordinate values of the passively selected feature points and passive radiation feature points in real time by using the matching relation between the original three-dimensional coordinate values.
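The data-pool idea can be sketched as a small store that keeps the static ("original") coordinates captured at initialization, keyed so that active and passive points stay matched, and updates passive points from active deltas. All names, and the assumption that the proportional relation is a per-axis scale, are illustrative.

```python
class DataPool:
    """Sketch of the data pool tracking module described above."""
    def __init__(self):
        self.active0 = {}   # point id -> original active coordinates
        self.passive0 = {}  # point id -> original passive coordinates
        self.scale = {}     # point id -> matched proportional relation

    def register(self, pid, active_xyz, passive_xyz, scale_xyz):
        """Static signal: store original coordinates and the matched scale."""
        self.active0[pid] = active_xyz
        self.passive0[pid] = passive_xyz
        self.scale[pid] = scale_xyz

    def update(self, pid, active_xyz):
        """Dynamic signal: move the passive point by the scaled active delta."""
        a0, p0, s = self.active0[pid], self.passive0[pid], self.scale[pid]
        return tuple(p + (a - a) * 0 + (a - ao) * k
                     for p, a, ao, k in zip(p0, active_xyz, a0, s))
```

For example, an active point registered at (1, 1, 0) moving to (1.5, 1, 0) with scale (2, 2, 1) shifts its passive partner from (2, 2, 0) to (3, 2, 0).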
As a preferred scheme of the present invention, the feature point selecting and radiating module is configured to select, under a dynamic signal, the active radiation feature points corresponding to each actively selected feature point; the influence range of an actively selected feature point under a dynamic signal is proportional to the change in its x and y values, and the influence range determines the change amplitude of the actively selected feature point's coordinate values through a direct proportional function;
and the change amplitude of the x and y coordinate values of the different active radiation feature points is inversely proportional to their distance from the actively selected feature point, so that this distance determines the change amplitude of each active radiation feature point's x and y coordinate values through an inverse proportional function.
As a preferable aspect of the present invention, when the influence ranges of two adjacent active radiation feature points overlap, the change in the x and y coordinate values of the affected active radiation feature point equals the superposition of the inverse proportional function values corresponding to the two points.
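The direct/inverse proportion rule and the overlap superposition above can be sketched as follows. The patent only states the proportional forms, so the gain constant `k` and the exact functions are assumptions for illustration.

```python
def radiation_change(delta, distance, k=1.0):
    """Displacement passed to one radiation point: proportional to the
    driving point's own change `delta` (direct proportion), inversely
    proportional to the radiation point's `distance` from it."""
    return k * delta / distance

def overlapped_change(drivers):
    """Where influence ranges overlap, the radiation point's coordinate
    change is the superposition (sum) of the individual inverse-proportion
    values. `drivers` is an iterable of (delta, distance) pairs."""
    return sum(radiation_change(d, r) for d, r in drivers)
```

So a radiation point twice as far away moves half as much, and a point driven by two overlapping ranges accumulates both contributions.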
In order to solve the above technical problems, the present invention further provides the following technical solutions: an expression capture method for generating an animation, comprising the steps of:
step 100, acquiring a dynamic image of a human face by using a camera shooting unit, acquiring facial contour features and facial features by using a human face recognition technology, capturing facial expression changes of the dynamic image of the human face by using a structured light method, and determining distribution positions of different facial feature parts in a three-dimensional face model;
step 200, reconstructing a three-dimensional coordinate system of the three-dimensional face model relative to the facial feature parts, and determining the original three-dimensional coordinate values in this coordinate system when the facial feature parts are static;
step 300, reconstructing a virtual character three-dimensional model system in the three-dimensional coordinate system, and establishing a proportional relation between each facial feature part and the corresponding virtual feature part in the three-dimensional coordinate system so as to complete the initialization matching work of the facial feature parts and the virtual feature parts;
step 400, selecting active selection feature points of the facial feature parts, and matching and selecting passive selection feature points of the virtual feature parts according to the proportional relation between the facial feature parts and the virtual feature parts;
step 500, capturing the change of the three-dimensional coordinate value of the active selection feature point in real time, and determining the change of the three-dimensional coordinate value of the passive selection feature point according to the proportional relation between the active selection feature point and the passive selection feature point so as to enable the animation image to track the facial expression change.
As a preferred scheme of the present invention, an influence range of the actively selected feature point is determined with the actively selected feature point as a center, a plurality of active radiation feature points are selected within the influence range, a change in three-dimensional coordinate values of the active radiation feature point driven by the actively selected feature point is determined by using an influence range function and an influence amplitude function, and the passively selected feature point determines a passive radiation feature point and a change in three-dimensional coordinate values of the passive radiation feature point in the same manner;
and the data pool tracking module is used to store the original three-dimensional coordinate values of the actively selected feature points, active radiation feature points, passively selected feature points and passive radiation feature points, and to establish a matching relation among these original values. When the coordinate values corresponding to these points change, the original three-dimensional coordinate values serve as marks for determining, in real time, the correspondence among the actively selected feature points, active radiation feature points, passively selected feature points and passive radiation feature points, so that the three-dimensional coordinate values of the passively selected feature points and passive radiation feature points are changed in real time according to the proportional relation.
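The real-time tracking rule of step 500 can be summarized in a short sketch. It assumes, for illustration only, that the proportional relation is a per-axis scale applied to the active point's displacement from its original (static) coordinates; names are not from the patent.

```python
def passive_changes(frames, origin, scale):
    """Per captured frame, a passive point's coordinate change equals the
    active point's displacement from its original coordinates, scaled by
    the matched proportional relation."""
    return [tuple((a - o) * s for a, o, s in zip(active, origin, scale))
            for active in frames]
```

With original coordinates (1, 1, 1) and scale (2, 2, 2), an active point that stays put produces no passive change, while a unit move along x produces a passive change of 2 along x.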
Compared with the prior art, the invention has the following beneficial effects:
(1) the invention establishes the coordinate values of the facial feature parts (structures such as the eyes, eyebrows, mouth, nose and ears) in the three-dimensional coordinate system, together with the coordinate values of the virtual feature structures in the same coordinate system, and constrains the similarity between each facial feature part and its corresponding virtual feature structure through face recognition technology, thereby ensuring that the facial features of the virtual character resemble those of the real face;
(2) the invention establishes a separate proportional relation between each facial feature part and its corresponding virtual feature structure (for example, between the real eyes and the animated virtual eyes), thereby improving the accuracy of the animated character's expression capture, unifying the expression amplitude of the animated virtual character with the real facial expression, and reducing the error between the captured expression of the animated virtual character and the real expression.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It should be apparent that the drawings in the following description are merely exemplary, and that other embodiments can be derived from the drawings provided by those of ordinary skill in the art without inventive effort.
FIG. 1 is a block diagram of an expression capture and animation automatic generation system according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of an expression capturing and animation automatic generation method according to an embodiment of the present invention.
The reference numerals in the drawings denote the following, respectively:
1-a face image acquisition system; 2-a real three-dimensional face reconstruction system; 3-a virtual character three-dimensional model system; 4-attitude matching module; 5-feature point association module; 6-an expression dynamic tracking module; 7-characteristic point selecting radiation module; 8-a data pool tracking module; 9-facial feature recognition module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, the present invention provides a face recognition system for expression capture. Conventional schemes link expression capture with animation generation so as to graft a real facial expression onto an animated virtual character; however, because the correspondence between the real facial expression and the virtual facial features is not clear, the expression amplitude of the animated virtual character is not uniform, the change amplitudes at different positions of the facial features do not correspond one to one, and the captured expression of the animated virtual character may therefore have large errors.
In order to solve the above problem, the expression capture and animation automatic generation system provided in this embodiment specifically includes:
the face image acquisition system 1 is divided into a camera shooting unit and a structured light acquisition unit, wherein the camera shooting unit is used for shooting a dynamic image of a face, and the structured light acquisition unit is used for acquiring the numerical value and the change of an optical signal of a human face in the dynamic image through a structured light imaging principle to capture facial expressions.
And the facial feature recognition module 9 is used for capturing still frames from the dynamic image shot by the camera shooting unit and determining facial contour features and facial features through the face image feature extraction unit.
It should be added that the hardware for structured-light three-dimensional imaging mainly comprises a camera and a projector. Structured light is active structural information projected onto the surface of the measured object by the projector; the measured surface is then photographed by one or more cameras to obtain a structured-light image, and three-dimensional reconstruction is finally realized through three-dimensional image analysis and calculation based on the triangulation principle.
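The triangulation principle mentioned here reduces, in the simplest pinhole-model case, to the similar-triangles relation z = f * b / d between depth, focal length, projector-camera baseline, and observed disparity of the projected pattern. A minimal sketch (parameter names are illustrative, units must be consistent):

```python
def triangulate_depth(baseline, focal_len, disparity):
    """Depth of a surface point from structured-light triangulation:
    baseline  - projector-to-camera distance,
    focal_len - camera focal length (in pixels if disparity is in pixels),
    disparity - shift of the projected pattern feature on the sensor."""
    return focal_len * baseline / disparity
```

For example, a 0.1 m baseline, a 500-pixel focal length, and a 5-pixel disparity give a depth of 10 m; larger disparities mean closer surfaces.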
The real three-dimensional face reconstruction system 2 is used for establishing a three-dimensional face model according to the structured light imaging principle of the structured light acquisition unit, simultaneously re-establishing a three-dimensional coordinate system of the three-dimensional face model, determining the distribution positions of a plurality of facial feature parts according to the values of the optical signals of the human face acquired by the structured light acquisition unit, and refining the facial contour features and the facial features of the facial feature parts according to the facial contour features and the facial features acquired by the facial feature recognition module.
The virtual character three-dimensional model system 3 is used for establishing a three-dimensional virtual contour on the basis of a three-dimensional coordinate system of the real three-dimensional face reconstruction system 2, filling virtual feature parts in the three-dimensional virtual contour and determining the one-to-one correspondence between the face feature parts and the virtual feature parts;
in addition, it should be noted that the real three-dimensional face reconstruction system 2 and the virtual character three-dimensional model system 3 are also integrally nested in the second three-dimensional coordinate system, and the second three-dimensional coordinate system is used for realizing the head movements of the real three-dimensional face reconstruction system 2 and the virtual character three-dimensional model system 3.
The virtual character three-dimensional model system 3 uses the same three-dimensional coordinate system as the real three-dimensional face reconstruction system 2, so when the real face swings from side to side or twists, the virtual character three-dimensional model system 3 rotates synchronously with the real three-dimensional face reconstruction system 2. This embodiment can therefore realize not only facial expression capture with synchronous animation demonstration, but also synchronous mirror-image movement of the animation with the real head.
The posture matching module 4 is used for selecting a plurality of active selection feature points of each facial feature part, establishing an active and passive relation of the facial feature parts actively pulling the virtual feature parts to move, and determining a proportional relation between a three-dimensional coordinate value of each facial feature part and a three-dimensional coordinate value of the corresponding virtual feature part;
the feature point association module 5 is configured to determine a passively selected feature point corresponding to each actively selected feature point in the virtual feature portion according to a proportional relationship between a three-dimensional coordinate value of each facial feature portion and a three-dimensional coordinate value of a corresponding virtual feature portion;
and the expression dynamic tracking module 6 is used for determining the three-dimensional coordinate change value of the passively selected feature point of the virtual feature part in the three-dimensional coordinate system, according to the three-dimensional coordinate change value of the actively selected feature point of the facial feature part in the three-dimensional coordinate system and the proportional relation of the actively selected feature point, so as to realize animation simulation.
Because the distribution of the animated character's features and the real head's facial features do not correspond exactly one to one (for example, when a rabbit face is used to animate the expression changes of a human face, the positions of the virtual features and the real facial features in the three-dimensional coordinate system are not set by a single proportional relation), each facial feature part (for example, the eyes, mouth and eyebrows) must be matched position by position; the position changes of each facial feature part are then demonstrated synchronously by the animated character to capture the facial expression.
Meanwhile, expression capture and animation demonstration can be realized through the matching of the feature points, the active-passive traction relationship, and arc smoothing between two adjacent feature points of the same facial feature part.
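The active-passive traction relation between a real feature point and its virtual counterpart can be sketched in a few lines. The function name, the per-axis ratio tuple and the purely linear model are illustrative assumptions, not the patented implementation:

```python
# Sketch of the active/passive traction relation: each passively selected
# feature point on the virtual face follows its actively selected
# counterpart on the real face through a part-specific scale ratio.

def pull_passive_point(active_orig, active_now, passive_orig, ratio):
    """Return the new passive-point coordinate: the active point's
    displacement, scaled per axis by ratio (rx, ry, rz), is added to the
    passive point's original position."""
    return tuple(p + r * (a_now - a_orig)
                 for p, r, a_now, a_orig in zip(passive_orig, ratio,
                                                active_now, active_orig))

# Example: a mouth-corner point on a rabbit-faced avatar whose mouth is
# half as wide as the real mouth (ratio 0.5 on x, 1.0 elsewhere).
new_pt = pull_passive_point((10.0, -20.0, 5.0), (14.0, -18.0, 5.0),
                            (5.0, -10.0, 2.0), (0.5, 1.0, 1.0))
```

The active point moved by (4, 2, 0), so the passive point moves by the scaled (2, 2, 0).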
In addition, the animation simulation described above only tracks the positions of the facial feature parts — such as eyeball, mouth and eyebrow movements — and cannot simulate changes in the muscle parts of the face.
The feature point selection radiation module 7 is used for selecting active radiation feature points according to the influence range of each actively selected feature point, and for determining the functional relationship between an actively selected feature point and the active radiation feature points in its different influence ranges according to the driving capability exerted on active radiation feature points at different distances from it;
the feature point association module 5 determines the passive radiation feature points corresponding to the active radiation feature points according to the proportional relationship between the three-dimensional coordinate value of each facial feature part and that of the corresponding virtual feature part, the functional relationship between a passively selected feature point and its passive radiation feature points being the same as that between the actively selected feature point and its active radiation feature points;
and the expression dynamic tracking module 6 determines the three-dimensional coordinate change values of the active radiation feature points according to the functional relationship, and determines the three-dimensional coordinate change values of the passive radiation feature points of the virtual feature part in the three-dimensional coordinate system according to the proportional relationship, so as to realize accurate animation simulation.
When a real face makes different expressions, the muscles near the feature parts change along with them; for example, when the face makes a surprised expression, the positions and sizes of the feature parts change and the muscle features of the face change at the same time.
In order to capture human expressions in a more detailed and realistic manner, in this embodiment the feature point selection radiation module 7 first selects active radiation feature points within the influence range of an actively selected feature point, and then uses the same proportional relationship between the actively and passively selected feature points to determine the influence range of the corresponding passively selected feature point and the three-dimensional coordinate values of the passive radiation feature points, that is, the selected positions of the passive radiation feature points.
The human face image acquisition system 1 divides the optical signals into static signals, corresponding to a motionless face, and dynamic signals, corresponding to facial motion. The real three-dimensional face reconstruction system 2 establishes the three-dimensional face model according to the static signals and determines the distribution positions of the facial feature parts according to the parameter differences among all the static signals, wherein,
the real three-dimensional face reconstruction system 2 re-establishes the three-dimensional coordinate system of the three-dimensional face model in the following manner:
establishing a two-dimensional coordinate system by taking the perpendicular bisector of the line connecting the eye feature parts of the three-dimensional face model as the Y axis, and taking the transverse line that connects the ear feature parts and intersects the perpendicular bisector at a right angle as the X axis;
taking the origin of the two-dimensional coordinate system as a starting point and drawing a perpendicular line into the interior of the three-dimensional face model as the Z axis, thereby completing the three-dimensional coordinate system;
The three-dimensional coordinate values (x, y, z) of the actively selected feature points and active radiation feature points of a facial feature part are, respectively, the distance from the point to the transverse line intersecting the perpendicular bisector, the distance from the point to the perpendicular bisector, and the facial concave-convex value corresponding to the depth value of the optical signal.
The passively selected feature points and passive radiation feature points of the virtual feature parts determine their corresponding three-dimensional coordinate values (x', y', z') according to the proportional relationship and the functional relationship. It should be noted that, because the virtual character three-dimensional model system 3 establishes the three-dimensional virtual contour on the basis of the three-dimensional coordinate system of the real three-dimensional face reconstruction system 2, the facial concave-convex values of the virtual feature parts in the three-dimensional virtual contour are neither identical to those of the real facial feature parts nor scaled by a single common proportion; therefore, the proportional relationship between the three-dimensional coordinate values (x, y, z) of each actively selected feature point and active radiation feature point and the three-dimensional coordinate values (x', y', z') of the corresponding passively selected and passive radiation feature points differs from part to part.
The z coordinate value of each actively selected feature point and active radiation feature point determines the z' coordinate value of the corresponding passively selected and passive radiation feature points according to its specific proportional relationship; the x coordinate values likewise determine the x' coordinate values, and the y coordinate values determine the y' coordinate values, each according to its own specific proportional relationship.
Meanwhile, the functional relationships between each actively selected feature point and the different active radiation feature points within its influence range are not all the same.
When a dynamic signal arrives, the feature point selection radiation module 7 selects the active radiation feature points corresponding to each actively selected feature point; the influence range of an actively selected feature point is directly proportional to the change values of its x and y coordinates, that influence range being determined from the coordinate value change amplitude of the actively selected feature point through a direct proportion function;
and the change amplitudes of the x1 and y1 coordinate values of the different active radiation feature points are inversely proportional to their distances from the actively selected feature point, the change amplitude of the x1 and y1 coordinate values of an active radiation feature point being determined from that distance through an inverse proportion function.
That is, the coordinate value of an active radiation feature point is (x1, y1) = k × m(x, y), where k is the change amplitude of the actively selected feature point and m is the position amplitude corresponding to the distance from the active radiation feature point to the actively selected feature point.
When the influence ranges of two adjacent actively selected feature points contain overlapping active radiation feature points, the change amplitude of the x1 and y1 coordinate values of such an active radiation feature point equals the superposition of the inverse proportion function values corresponding to the two actively selected feature points.
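A minimal sketch of the radiation-point update above, combining the inverse proportion function with the superposition rule for overlapping influence ranges; the constant `c` and the linear summation of contributions are assumptions for illustration:

```python
import math

# Radiation-point displacement (x1, y1) = k * m * (x, y): k is the
# active point's change amplitude, m an inverse-proportion factor of
# the distance to the active point.  When a radiation point lies in the
# overlapping influence ranges of several active points, the
# contributions are superposed (summed).

def radiation_delta(rad_pt, active_points, c=1.0):
    """Displacement of one radiation point driven by nearby active points.

    active_points: list of (position, (dx, dy), k) tuples, where
    (dx, dy) is the active point's coordinate change and k its amplitude.
    """
    total = [0.0, 0.0]
    for pos, (dx, dy), k in active_points:
        dist = math.dist(rad_pt, pos)
        m = c / dist                   # inverse-proportion position factor
        total[0] += k * m * dx         # superposition over overlapping ranges
        total[1] += k * m * dy
    return tuple(total)

# One radiation point 2.0 units from a single active point that moved
# by (1.0, 0.5) with amplitude k = 4.0.
delta = radiation_delta((2.0, 0.0), [((0.0, 0.0), (1.0, 0.5), 4.0)])
```

With distance 2.0 the factor m is 0.5, so the radiation point moves by (2.0, 1.0) — half the scaled active displacement, as the inverse proportion demands.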
In addition, in this embodiment, the data pool tracking module 8 determines a one-to-one matching relationship among the actively selected feature points, active radiation feature points, passively selected feature points and passive radiation feature points whose changes are tracked when the expression changes; it then quickly finds, from the changed three-dimensional coordinate values of the actively selected and active radiation feature points, the corresponding passively selected and passive radiation feature points, and synchronously changes their three-dimensional coordinate values according to the proportional relationship.
When the optical signal is a static signal and the posture matching module 4 and the feature point association module 5 have completed the initialization of the proportional and functional relationships, the data pool tracking module 8 stores the original three-dimensional coordinate values of the actively selected and active radiation feature points and those of the passively selected and passive radiation feature points. When the optical signal is a dynamic signal representing facial motion, the data pool tracking module 8 changes the original three-dimensional coordinate values of the actively selected and active radiation feature points in real time, and uses the matching relationship between the original three-dimensional coordinate values to change the three-dimensional coordinate values of the passively selected and passive radiation feature points in real time.
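The data pool described here can be sketched as a lookup keyed by the original static-signal coordinates; the dict-based storage, class name and per-axis ratio tuple are illustrative assumptions:

```python
# Sketch of the data-pool tracking: original three-dimensional
# coordinates captured from the static signal act as keys that pair
# each active point with its passive counterpart, so a dynamic-signal
# update can find and scale the matching passive point quickly.

class DataPool:
    def __init__(self):
        # active original coordinate -> (passive original coordinate, ratio)
        self._pairs = {}

    def register(self, active_orig, passive_orig, ratio):
        """Store the static-signal (initialization) match."""
        self._pairs[active_orig] = (passive_orig, ratio)

    def update(self, active_orig, active_now):
        """Return the passive point's new coordinate for a tracked change,
        scaling the active displacement by the stored per-axis ratio."""
        passive_orig, ratio = self._pairs[active_orig]
        return tuple(p + r * (n - o) for p, r, n, o in
                     zip(passive_orig, ratio, active_now, active_orig))

pool = DataPool()
pool.register((1.0, 2.0, 0.5), (2.0, 4.0, 1.0), (2.0, 2.0, 1.0))
moved = pool.update((1.0, 2.0, 0.5), (1.5, 2.0, 0.5))   # x grew by 0.5
```

Because the original coordinates serve as the dictionary key, the lookup is constant-time per feature point, matching the "quickly found" behavior the text attributes to the module.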
In this embodiment, the selection of the active radiation feature points is affected by the correspondence between the facial feature parts and the surrounding facial muscles, the distances between those muscles and the feature parts, and the change amplitude of the feature parts. In general, the larger the change amplitude of a facial feature part, the larger the range over which active radiation feature points are selected; and the closer an active radiation feature point is to the facial feature part, the larger the change in its three-dimensional coordinate values.
Therefore, in this embodiment, the number of active radiation feature points is determined according to the change amplitude of the actively selected feature points of the facial feature part, and the three-dimensional coordinate values of the active radiation feature points are further determined according to their distances from the actively selected feature points;
the three-dimensional coordinate values of the passively selected and passive radiation feature points are then determined according to the proportional relationships between the actively and passively selected feature points and between the active and passive radiation feature points, so that the muscle changes of a real facial expression are synchronously displayed on the face of the animated virtual character, improving the flexibility and accuracy of the animated character's expression changes.
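The selection rule summarized above — more radiation points for a larger change amplitude, each kept or discarded by distance — might look like the following; the direct-proportion radius formula and all names are assumptions:

```python
import math

# Sketch of radiation-point selection: the influence radius grows in
# direct proportion to the active point's change amplitude, and every
# candidate muscle point inside that radius is selected.

def select_radiation_points(active_pt, change_amp, candidates,
                            base_radius=1.0):
    """Keep candidate muscle points within a radius that is directly
    proportional to the active point's change amplitude."""
    radius = base_radius * change_amp          # direct-proportion range
    return [c for c in candidates
            if math.dist(active_pt, c) <= radius]

# Amplitude 2.0 gives a radius of 2.0: the near candidate is kept,
# the far one falls outside the influence range.
kept = select_radiation_points((0.0, 0.0), 2.0, [(1.0, 0.0), (3.0, 0.0)])
```

A larger expression (bigger `change_amp`) widens the radius and so recruits more muscle points, which is exactly the qualitative rule the paragraph states.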
As shown in fig. 2, the present invention also provides an expression capturing method for generating an animation, comprising the steps of:
step 100, acquiring a dynamic image of a human face by using a camera shooting unit, acquiring facial contour features and facial features by using a human face recognition technology, capturing facial expression changes of the dynamic image of the human face by using a structured light method, and determining distribution positions of different facial feature parts in a three-dimensional face model;
step 200, reconstructing a three-dimensional coordinate system of the three-dimensional face model relative to the facial feature parts, and determining the original three-dimensional coordinate values in the three-dimensional coordinate system when the facial feature parts are static;
step 300, reconstructing a virtual character three-dimensional model system in a three-dimensional coordinate system, and establishing a proportional relation between each facial feature part and the corresponding virtual feature part in the three-dimensional coordinate system so as to complete the initialization matching work of the facial feature parts and the virtual feature parts;
step 400, selecting active selection feature points of the facial feature part, and matching and selecting passive selection feature points of the virtual feature part according to the proportional relation between the facial feature part and the virtual feature part;
and step 500, capturing the change of the three-dimensional coordinate values of the actively selected feature points in real time, and determining the change of the three-dimensional coordinate values of the passively selected feature points according to the proportional relationship between them, so that the animated image tracks the facial expression changes.
Determining an influence range of the actively selected characteristic points by taking the actively selected characteristic points as a center, selecting a plurality of active radiation characteristic points in the influence range, determining the three-dimensional coordinate value change of the actively radiated characteristic points driven by the actively selected characteristic points by utilizing an influence range function and an influence amplitude function, and determining the passive radiation characteristic points and the three-dimensional coordinate value change of the passive radiation characteristic points by utilizing the passively selected characteristic points in the same way;
and storing original three-dimensional coordinate values of the active selection feature point, the active radiation feature point, the passive selection feature point and the passive radiation feature point by using a data pool tracking module, establishing a matching relation of the original three-dimensional coordinate values, and when the original three-dimensional coordinate values corresponding to the active selection feature point, the active radiation feature point, the passive selection feature point and the passive radiation feature point change, determining the corresponding relation among the active selection feature point, the active radiation feature point, the passive selection feature point and the passive radiation feature point in real time by using the original three-dimensional coordinate values as marks, so as to change the three-dimensional coordinate values of the passive selection feature point and the passive radiation feature point in real time according to a proportional relation.
Therefore, in summary, the present embodiment achieves the following two advantages:
First, this embodiment establishes the coordinates of the facial feature parts — structures such as the eyes, eyebrows, mouth, nose and ears — in a three-dimensional coordinate system, together with the coordinates of the virtual feature structures in the same coordinate system, and uses the localization provided by the face recognition technology to ensure the similarity between the real face and the virtual character's face. In addition, a proportional relationship is established between each facial feature part and its corresponding virtual feature structure — for example, between the real eye and the virtual eye of the animation — which improves the accuracy of the animated character's expression capture and unifies the expression amplitude of the animated virtual character with the real facial expression, thereby reducing the expression capture error of the animated virtual character.
Secondly, this embodiment not only captures the changes of the facial feature parts, but also captures the muscle changes of the character's facial expression by delineating radiation ranges and setting different change amplitudes for different radiation rings, so that the muscle changes of a real facial expression are synchronously displayed on the face of the animated virtual character, improving the flexibility and accuracy of the animated character's expression changes.
The above embodiments are only exemplary embodiments of the present application, and are not intended to limit the present application, and the protection scope of the present application is defined by the claims. Various modifications and equivalents may be made by those skilled in the art within the spirit and scope of the present application and such modifications and equivalents should also be considered to be within the scope of the present application.
Claims (10)
1. A face recognition system for expression capture, comprising:
the human face image acquisition system (1) comprises a camera shooting unit and a structured light acquisition unit, wherein the camera shooting unit is used for shooting a dynamic image of a human face, and the structured light acquisition unit is used for acquiring the numerical value and the change of an optical signal of a human face in the dynamic image through a structured light imaging principle to capture facial expressions;
the facial feature recognition module (9) is used for statically intercepting the dynamic image shot by the camera shooting unit and determining facial contour features and facial features through the facial image feature extraction unit;
the real three-dimensional face reconstruction system (2) is used for establishing a three-dimensional face model according to the structured light imaging principle of the structured light acquisition unit, simultaneously re-establishing a three-dimensional coordinate system of the three-dimensional face model, determining the distribution positions of a plurality of facial feature parts according to the values of optical signals of the human face acquired by the structured light acquisition unit, and refining the facial contour features and the facial features of the facial feature parts according to the facial contour features and the facial features acquired by the facial feature recognition module;
the virtual character three-dimensional model system (3) is used for establishing a three-dimensional virtual contour on the basis of a three-dimensional coordinate system of the real three-dimensional face reconstruction system (2), filling virtual feature parts in the three-dimensional virtual contour and determining the one-to-one correspondence between the face feature parts and the virtual feature parts;
the posture matching module (4) is used for selecting a plurality of active selection feature points of each facial feature part, establishing an active and passive relation of the facial feature parts actively pulling the virtual feature parts to move, and determining a proportional relation between the three-dimensional coordinate value of each facial feature part and the three-dimensional coordinate value of the corresponding virtual feature part;
the characteristic point association module (5) is used for determining a passive selection characteristic point corresponding to each active selection characteristic point in the virtual characteristic part according to the proportional relation between the three-dimensional coordinate value of each facial characteristic part and the three-dimensional coordinate value of the corresponding virtual characteristic part;
and the expression dynamic tracking module (6) is used for determining the three-dimensional coordinate change value of the passively selected feature point of the virtual feature part in the three-dimensional coordinate system according to the three-dimensional coordinate change value of the actively selected feature point of the facial feature part in the three-dimensional coordinate system and the proportional relationship, so as to realize dynamic virtualization of the dynamic human face image acquired by the human face image acquisition system (1).
2. A face recognition system for expression capture as claimed in claim 1, wherein: the characteristic point selecting and radiating module (7) is used for selecting active radiating characteristic points according to the influence range of each active selecting characteristic point and determining the functional relation between the active selecting characteristic points and the active radiating characteristic points with different influence ranges according to the driving capacity of the active radiating characteristic points with different distances from the active selecting characteristic points;
the feature point association module (5) determines passive radiation feature points corresponding to the active radiation feature points according to the proportional relationship between the three-dimensional coordinate value of each facial feature part and the three-dimensional coordinate value of the corresponding virtual feature part, and the functional relationship between the passive selection feature points and the passive radiation feature points is the same as the functional relationship between the active selection feature points and the active radiation feature points;
and the expression dynamic tracking module (6) determines the three-dimensional coordinate change value of the active radiation characteristic point according to a functional relation, and determines the three-dimensional coordinate change value of the passive radiation characteristic point of the virtual characteristic part in a three-dimensional coordinate system according to a proportional relation so as to accurately realize animation simulation.
3. A face recognition system for expression capture as claimed in claim 2, wherein: the human face image acquisition system (1) divides the optical signals into static signals corresponding to no action of the face and dynamic signals corresponding to work of the face, the real three-dimensional face reconstruction system (2) establishes a three-dimensional face model according to the static signals and determines the distribution positions of the facial feature parts according to the optical signal parameter difference of all the static signals, wherein,
the real three-dimensional face reconstruction system (2) re-establishes the three-dimensional coordinate system of the three-dimensional face model in the following way:
establishing a two-dimensional coordinate system by taking a perpendicular bisector of a connecting line of eye feature parts of the three-dimensional face model as a Y axis and taking a transverse connecting line which is arranged between ear feature parts and is directly crossed with the perpendicular bisector as an X axis;
taking the origin of a two-dimensional coordinate system as a starting point, making a vertical line inside the three-dimensional face model as a Z axis, and finally establishing a three-dimensional coordinate system;
and the three-dimensional coordinate values (x, y, z) of the active selection feature point and the active radiation feature point of the facial feature part are respectively the distance between the active selection feature point and a transverse connecting line intersected with the perpendicular bisector, the distance between the active selection feature point and the perpendicular bisector and the concave-convex position of the face corresponding to the depth value of the optical signal.
4. A face recognition system for expression capture as claimed in claim 3, wherein: the passively selected feature points and passive radiation feature points of the virtual feature part determine their corresponding three-dimensional coordinate values (x', y', z') according to the proportional relationship and the functional relationship.
5. A face recognition system for expression capture as claimed in claim 3, wherein: the real three-dimensional face reconstruction system (2) and the virtual character three-dimensional model system (3) are integrally nested in a second three-dimensional coordinate system, and the second three-dimensional coordinate system is used for realizing head actions of the real three-dimensional face reconstruction system (2) and the virtual character three-dimensional model system (3).
6. A face recognition system for expression capture as claimed in claim 3, wherein: the system also comprises a data pool tracking module (8), when the optical signal is a static signal and the attitude matching module (4) and the characteristic point correlation module (5) finish the initialization of proportional relation and functional relation, the data pool tracking module (8) is used for saving the original three-dimensional coordinate values of the active selection characteristic points and the active radiation characteristic points, and the original three-dimensional coordinate values of the passively selected feature points and the passively radiated feature points, the data pool tracking module (8) is used for changing the original three-dimensional coordinate values of the active selection characteristic points and the active radiation characteristic points in real time when the optical signals are dynamic signals representing face work, and changing the three-dimensional coordinate values of the passively selected feature points and the passively radiated feature points in real time by using the matching relationship between the original three-dimensional coordinate values.
7. A face recognition system for expression capture as claimed in claim 2, wherein: the characteristic point selection radiation module (7) is used for selecting active radiation characteristic points corresponding to each active selection characteristic point as a dynamic signal, the influence range of the active selection characteristic points as the dynamic signal is in direct proportion to the change value of the active selection characteristic points x and y, and the influence range of the active selection characteristic points determines the coordinate value change amplitude of the active selection characteristic points through a direct proportion function;
and the change amplitudes of the x1 and y1 coordinate values of different active radiation feature points are inversely proportional to the distance between the active radiation feature point and the actively selected feature point, the change amplitude of the x1 and y1 coordinate values of an active radiation feature point being determined by that distance through an inverse proportion function.
8. A face recognition system for expression capture as claimed in claim 7, wherein: when the influence ranges of two adjacent active selection characteristic points are overlapped, the change range of the coordinate values of x1 and y1 of the active radiation characteristic points is equal to the superposition result of inverse proportion function values corresponding to the two active selection characteristic points.
9. A face recognition method for a face recognition system for expression capture according to any one of claims 1 to 7, comprising the steps of:
step 100, acquiring a dynamic image of a human face by using a camera shooting unit, acquiring facial contour features and facial features by using a human face recognition technology, capturing facial expression changes of the dynamic image of the human face by using a structured light method, and determining distribution positions of different facial feature parts in a three-dimensional face model;
step 200, reconstructing a three-dimensional coordinate system of the three-dimensional face model relative to the facial feature parts, and determining the original three-dimensional coordinate values in the three-dimensional coordinate system when the facial feature parts are static;
step 300, reconstructing a virtual character three-dimensional model system in the three-dimensional coordinate system, and establishing a proportional relation between each facial feature part and the corresponding virtual feature part in the three-dimensional coordinate system so as to complete the initialization matching work of the facial feature parts and the virtual feature parts;
step 400, selecting active selection feature points of the facial feature parts, and matching and selecting passive selection feature points of the virtual feature parts according to the proportional relation between the facial feature parts and the virtual feature parts;
step 500, capturing the change of the three-dimensional coordinate value of the active selection feature point in real time, and determining the change of the three-dimensional coordinate value of the passive selection feature point according to the proportional relation between the active selection feature point and the passive selection feature point so as to enable the animation image to track the facial expression change.
10. The method according to claim 9, wherein the influence range of the actively selected feature point is determined with the actively selected feature point as a center, a plurality of actively radiating feature points are selected within the influence range, a three-dimensional coordinate value change of the actively radiating feature point driven by the actively selected feature point is determined by using an influence range function and an influence amplitude function, and a passive radiating feature point and a three-dimensional coordinate value change of the passive radiating feature point are determined by using the passively selected feature point in the same manner;
and storing original three-dimensional coordinate values of the active selection feature point, the active radiation feature point, the passive selection feature point and the passive radiation feature point by using a data pool tracking module, establishing a matching relation of the original three-dimensional coordinate values, and determining the corresponding relation among the active selection feature point, the active radiation feature point, the passive selection feature point and the passive radiation feature point in real time by taking the original three-dimensional coordinate values as marks when the original three-dimensional coordinate values corresponding to the active selection feature point, the active radiation feature point, the passive selection feature point and the passive radiation feature point are changed, so as to change the three-dimensional coordinate values of the passive selection feature point and the passive radiation feature point in real time according to a proportional relation.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011425666.3A CN112232310B (en) | 2020-12-09 | 2020-12-09 | Face recognition system and method for expression capture |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011425666.3A CN112232310B (en) | 2020-12-09 | 2020-12-09 | Face recognition system and method for expression capture |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112232310A CN112232310A (en) | 2021-01-15 |
| CN112232310B true CN112232310B (en) | 2021-03-12 |
Family
ID=74124696
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202011425666.3A Active CN112232310B (en) | 2020-12-09 | 2020-12-09 | Face recognition system and method for expression capture |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112232310B (en) |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112929619B (en) * | 2021-02-03 | 2022-04-19 | 广州工程技术职业学院 | Tracking display structure of facial feature points in animation character |
| CN112954205A (en) * | 2021-02-04 | 2021-06-11 | 重庆第二师范学院 | Image acquisition device applied to pedestrian re-identification system |
| TWI814318 | 2021-04-02 | 2023-09-01 | Sony Interactive Entertainment LLC | Method for training a model using a simulated character for animating a facial expression of a game character and method for generating label values for facial expressions of a game character using three-dimensional (3D) image capture |
| CN113313020B (en) * | 2021-05-27 | 2023-04-07 | 成都威爱新经济技术研究院有限公司 | Unmarked facial expression capturing method and system based on virtual human |
| CN115393486B (en) * | 2022-10-27 | 2023-03-24 | 科大讯飞股份有限公司 | Method, device and equipment for generating virtual image and storage medium |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN100458831C (en) * | 2006-06-01 | 2009-02-04 | 北京中星微电子有限公司 | Human face model training module and method, human face real-time certification system and method |
| CN106327482B (en) * | 2016-08-10 | 2019-01-22 | 东方网力科技股份有限公司 | A kind of method for reconstructing and device of the facial expression based on big data |
| CN108734757A (en) * | 2017-04-14 | 2018-11-02 | 北京佳士乐动漫科技有限公司 | A kind of method that sound captures realization 3 D human face animation with expression |
| KR20240027845A (en) * | 2018-04-18 | 2024-03-04 | 스냅 인코포레이티드 | Augmented expression system |
| CN108681719A (en) * | 2018-05-21 | 2018-10-19 | 北京微播视界科技有限公司 | Method of video image processing and device |
- 2020-12-09: application CN202011425666.3A filed in China; granted as CN112232310B (legal status: Active)
Also Published As
| Publication number | Publication date |
|---|---|
| CN112232310A (en) | 2021-01-15 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN112232310B (en) | Face recognition system and method for expression capture | |
| CN111710036B (en) | Method, device, equipment and storage medium for constructing three-dimensional face model | |
| EP4383193A1 (en) | Line-of-sight direction tracking method and apparatus | |
| CN110363133B (en) | Method, device, equipment and storage medium for sight line detection and video processing | |
| KR102658303B1 (en) | Head-mounted display for virtual and mixed reality with inside-out positional, user body and environment tracking | |
| CN113689503B (en) | Target object posture detection method, device, equipment and storage medium | |
| US9779512B2 (en) | Automatic generation of virtual materials from real-world materials | |
| CN113366491B (en) | Eyeball tracking method, device and storage medium | |
| Chen et al. | 3D face reconstruction and gaze tracking in the HMD for virtual interaction | |
| KR20200066371A (en) | Event camera-based deformable object tracking | |
| CN108985172A (en) | A kind of Eye-controlling focus method, apparatus, equipment and storage medium based on structure light | |
| WO2019140945A1 (en) | Mixed reality method applied to flight simulator | |
| Watanabe et al. | Extended dot cluster marker for high-speed 3d tracking in dynamic projection mapping | |
| Ohya et al. | Real-time reproduction of 3D human images in virtual space teleconferencing | |
| KR101759188B1 (en) | the automatic 3D modeliing method using 2D facial image | |
| CN106708270A (en) | Display method and apparatus for virtual reality device, and virtual reality device | |
| CN110717391A (en) | Height measuring method, system, device and medium based on video image | |
| US20240257419A1 (en) | Virtual try-on via warping and parser-based rendering | |
| JPWO2006049147A1 (en) | Three-dimensional shape estimation system and image generation system | |
| Malleson et al. | Rapid one-shot acquisition of dynamic VR avatars | |
| CN116863044A (en) | Face model generation method and device, electronic equipment and readable storage medium | |
| CN108537103B (en) | Living body face detection method and device based on pupil axis measurement | |
| CN106909904A (en) | It is a kind of based on the face front method that can learn Deformation Field | |
| US9792715B2 (en) | Methods, systems, and computer readable media for utilizing synthetic animatronics | |
| US20240054765A1 (en) | Information processing method and apparatus |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| CP03 | Change of name, title or address | ||
Address after: 701, 7th Floor, and 801, 8th Floor, Building 1, Courtyard 8, Gouzitou Street, Changping District, Beijing, 102200
Patentee after: Zhongying Nian Nian (Beijing) Technology Co.,Ltd.
Country or region after: China
Address before: 102209, 32 Wangfu Street, Beiqijia Town, Changping District, Beijing
Patentee before: China Film annual (Beijing) culture media Co.,Ltd.
Country or region before: China