
CN119380394A - An algorithm for facial expression recognition and imitation of virtual characters or robots - Google Patents


Info

Publication number
CN119380394A
Authority
CN
China
Prior art keywords
robot
facial
head
angle
facial expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202411953666.9A
Other languages
Chinese (zh)
Other versions
CN119380394B (en)
Inventor
袁野
任钦泽
刘娜
李清都
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Zhuoyide Robot Co., Ltd.
Original Assignee
Shanghai Zhuoyide Robot Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Zhuoyide Robot Co., Ltd.
Priority to CN202411953666.9A
Publication of CN119380394A
Application granted
Publication of CN119380394B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1671 Programme controls characterised by programming, planning systems for manipulators characterised by simulation, either to verify existing program or to create and verify new program, CAD/CAM oriented, graphic oriented programming systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Evolutionary Computation (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Automation & Control Theory (AREA)
  • Processing Or Creating Images (AREA)

Abstract


An algorithm for facial expression recognition and imitation for virtual characters or robots; the invention belongs to the field of image recognition and imitation. The robot version of the algorithm mainly comprises: inputting a cropped face image into a deep model, identifying the facial key points in the image through the model and obtaining head pose parameters, then predicting facial expression values from the facial key points; calculating the robot head pose by computing a rotation matrix from the head pose parameters and converting the rotation matrix into the Euler angles of the robot head pose; mapping the predicted facial expression values to the rotation angles of the steering engines that control the robot's face; and thereby controlling the operation of the robot. The invention can obtain robot imitation data for micro-expressions of small dynamic amplitude, makes it convenient to map a large volume of video expressions accurately onto a physical robot to obtain large amounts of recognition and imitation data, and links the overall head pose so that expressions are rendered more accurately.

Description

Facial expression recognition and imitation algorithm for a virtual character or robot
Technical Field
The invention relates to the technical field of image recognition and imitation, and in particular to a facial expression recognition and imitation algorithm for a virtual character or robot.
Background
With the development of artificial intelligence, intelligent human-computer interaction has attracted growing interest and is widely welcomed by users. Traditional human-computer interaction, however, is limited to voice and image interaction and can hardly satisfy users' diversified demands. Emerging techniques such as expression recognition and interaction and eye-tracking interaction all need a carrier to be presented well. The traditional carrier is an AI virtual character, but an expression robot that imitates a human is currently the best carrier for human-computer interaction: it can integrate and present multiple interaction technologies, and it is the most natural interaction carrier for users. A mature expression-robot product can therefore be widely applied in the future to education assistance, medical companionship and nursing, customer guidance services, entertainment demonstration, social contact and other fields.
However, current expression-robot technology still requires a large amount of expression recognition and imitation data before robot expression interaction becomes natural and vivid. In the prior art, patent document CN116597484A discloses a real-time expression imitation method and device for a robot. The method recognizes a face image to be detected to obtain the positions of the key points of every face in the image, determines the face to be imitated according to those key point positions, and compares the current frame of the face image with the previous frame in real time to obtain the transformation trend and transformation proportion of the face to be imitated. The transformation trend is used to adjust the rotation angle of the robot head, and the transformation proportion is used to calculate the rotation angles and running times of the robot's steering engines, so that the steering engines drive the robot's actions and the face to be imitated is reproduced. Although this scheme does provide robot expression imitation, a real-time imitation mode can hardly supply large amounts of expression recognition and imitation data, and obtaining the transformation trend and proportion of all key-point coordinates makes it difficult to capture micro-expressions of small dynamic amplitude: only expressions with large dynamic amplitude can be acquired by adjusting the robot according to the transformation trend, and micro-expressions of small amplitude can hardly be reflected onto the robot for imitation.
Disclosure of Invention
The invention provides a facial expression recognition and imitation algorithm for a virtual character or robot that can obtain robot imitation data for micro-expressions of small dynamic amplitude, makes it convenient to map a large volume of video expressions accurately onto a physical robot to obtain large amounts of recognition and imitation data, and links the overall head pose so that expressions are rendered more accurately; at the same time, the robot can be associated with a virtual character for synchronous expression recognition and imitation and for comparative reference observation tests.
The specific technical scheme is as follows:
There is provided a facial expression recognition and imitation algorithm for a robot, comprising the following steps:
S1, cyclically acquiring face images, and cropping and pre-processing them;
S2, inputting the cropped face image into the deep model MediaPipe, identifying the facial key points in the image through the model, obtaining head pose parameters, and predicting facial expression values from the facial key points;
S3, calculating the robot head pose: establishing three basic rotation matrices from the head pose parameters, taking their product as the rotation matrix, and then converting the rotation matrix into the three Euler angles of the robot head pose;
S4, mapping the position of each predicted facial expression value within the range of the prediction's original data set into the target data set of the robot facial control steering engines (servos), thereby obtaining the steering engine rotation angles;
S5, finally, controlling the robot head pose and the rotation of the robot facial steering engines according to the Euler angles and the rotation angles.
Further, in S3, the calculation of the robot head pose is specifically:
according to the deep model MediaPipe, the head pose parameters are α, β and γ respectively, and the three basic rotation matrices are established as

$$R_z(\alpha)=\begin{pmatrix}\cos\alpha&-\sin\alpha&0\\ \sin\alpha&\cos\alpha&0\\ 0&0&1\end{pmatrix},\quad R_y(\beta)=\begin{pmatrix}\cos\beta&0&\sin\beta\\ 0&1&0\\ -\sin\beta&0&\cos\beta\end{pmatrix},\quad R_x(\gamma)=\begin{pmatrix}1&0&0\\ 0&\cos\gamma&-\sin\gamma\\ 0&\sin\gamma&\cos\gamma\end{pmatrix}.$$

Then the product of the three basic rotation matrices is the rotation matrix $RM=R_z(\alpha)\,R_y(\beta)\,R_x(\gamma)$, and finally the rotation matrix is converted into the three Euler angles of the robot head pose.
Further, in S1, the face images are cyclically acquired by an image acquisition device.
Further, the Euler angle values and rotation angle values obtained in S3 and S4 are subjected to filtering processing.
Further, the filtering is mean filtering: the angle values produced at times adjacent to the currently processed angle value are averaged, and the resulting mean replaces the currently processed value when adjusting the robot head pose or the rotation of the robot facial steering engines, specifically:

$$\bar\theta_t=\frac{1}{k}\sum_{j\in N_k(t)}\theta_j,$$

where the obtained $\bar\theta_t$ is the stable angle value after filtering, $\theta_t$ is the currently processed angle value, $k$ is the size of the neighborhood window $N_k(t)$ selected around $\theta_t$, and the $\theta_j$ are the angle values obtained at adjacent times.
Further, $k$ takes a value of 5–10.
Further, 52 facial key points in the face image are identified through the deep model MediaPipe and 3 head pose parameters are obtained; the number of facial expression values predicted from the facial key points is likewise 52.
There is also provided a facial expression recognition and imitation algorithm for a virtual character: 52 shape keys (morphological keys) and 3 head poses corresponding to a human face are selected in the Blender software and bound to the corresponding positions of the virtual character's head; the 3 head pose parameters obtained by recognizing the face image through the deep model MediaPipe are then mapped to the 3 head poses in Blender, and the 52 facial expression values predicted from the facial key points are mapped to the 52 shape keys selected in Blender, thereby realizing bound control of the 52 shape keys and 3 head poses of the virtual character's facial expression.
The advantages of the method are that it can obtain robot imitation data for micro-expressions of small dynamic amplitude; that it is convenient to map a large volume of video expressions accurately onto a physical robot to obtain large amounts of recognition and imitation data; and that the overall head pose is linked, so that expressions are rendered more accurately, while the method can also be associated with a virtual character for synchronous expression recognition and imitation, joint interaction, or comparative reference observation tests.
Drawings
FIG. 1 is a flow chart of the whole method of the invention.
Detailed Description
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings, so that those skilled in the art can more easily understand the advantages and features of the invention and the scope of protection of the invention is clearly defined.
Examples:
As shown in FIG. 1, a facial expression recognition and imitation algorithm for a robot comprises the following steps.
S1, cyclically acquiring face images through an image acquisition device, or extracting them directly from video frames, and cropping and pre-processing the face images.
S2, inputting the cropped face image into the deep model MediaPipe (an open-source cross-platform framework developed by Google, mainly used for building multimedia-processing and machine-learning applications), identifying the facial key points in the image through the model, obtaining the head pose parameters, and predicting facial expression values from the facial key points.
The deep model MediaPipe identifies 52 facial key points in the face image and obtains 3 head pose parameters; the number of facial expression values predicted from the facial key points is likewise 52.
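As a concrete sketch of S1 and S2, the Python code below uses MediaPipe's Face Landmarker task, whose outputs line up with the quantities described above: a list of 52 blendshape scores (the facial expression values) and a facial transformation matrix from which the head pose can be taken. The model file name face_landmarker.task, the camera index, and the use of this particular task API are illustrative assumptions, not the patent's stated implementation.

```python
import cv2
import mediapipe as mp
from mediapipe.tasks import python as mp_tasks
from mediapipe.tasks.python import vision

# Assumed model asset, downloadable per the MediaPipe documentation.
options = vision.FaceLandmarkerOptions(
    base_options=mp_tasks.BaseOptions(model_asset_path="face_landmarker.task"),
    output_face_blendshapes=True,                # 52 expression scores in [0, 1]
    output_facial_transformation_matrixes=True,  # 4x4 head pose matrix
    num_faces=1,
)
landmarker = vision.FaceLandmarker.create_from_options(options)

cap = cv2.VideoCapture(0)  # S1: cyclic acquisition (camera index assumed)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Cropping/pre-processing of the face region would be applied here (S1).
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    result = landmarker.detect(
        mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb))  # S2
    if result.face_blendshapes:
        scores = {b.category_name: b.score for b in result.face_blendshapes[0]}
        pose = result.facial_transformation_matrixes[0]  # rotation in pose[:3, :3]
        # scores holds the 52 expression values; pose yields the head pose.
cap.release()
```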
S3, calculating the robot head pose: establishing three basic rotation matrices from the head pose parameters, taking their product as the rotation matrix, and then converting the rotation matrix into the three Euler angles of the robot head pose. Linking the overall head pose of the robot makes the expression more accurate and facilitates the accurate mapping of the subsequent facial expression values.
The calculation of the robot head pose in S3 is specifically:
according to the deep model MediaPipe, the head pose parameters are α, β and γ respectively, and the three basic rotation matrices are established as

$$R_z(\alpha)=\begin{pmatrix}\cos\alpha&-\sin\alpha&0\\ \sin\alpha&\cos\alpha&0\\ 0&0&1\end{pmatrix},\quad R_y(\beta)=\begin{pmatrix}\cos\beta&0&\sin\beta\\ 0&1&0\\ -\sin\beta&0&\cos\beta\end{pmatrix},\quad R_x(\gamma)=\begin{pmatrix}1&0&0\\ 0&\cos\gamma&-\sin\gamma\\ 0&\sin\gamma&\cos\gamma\end{pmatrix}.$$

Then the product of the three basic rotation matrices is the rotation matrix $RM=R_z(\alpha)\,R_y(\beta)\,R_x(\gamma)$, and finally the rotation matrix is converted into the three Euler angles of the robot head pose.
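As a minimal sketch of this composition in NumPy (the assignment of α, β and γ to the Z, Y and X axes follows the Z-Y-X order stated below and is otherwise an assumption):

```python
import numpy as np

def head_rotation_matrix(alpha: float, beta: float, gamma: float) -> np.ndarray:
    """Compose RM = Rz(alpha) @ Ry(beta) @ Rx(gamma), i.e. Z-Y-X order."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rz = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cb, 0.0, sb], [0.0, 1.0, 0.0], [-sb, 0.0, cb]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cg, -sg], [0.0, sg, cg]])
    return Rz @ Ry @ Rx
```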
The way of converting the rotation matrix into the three Euler angles of the robot head pose is, in general, as follows:
if the obtained rotation matrix is set as

$$RM=\begin{pmatrix}r_{11}&r_{12}&r_{13}\\ r_{21}&r_{22}&r_{23}\\ r_{31}&r_{32}&r_{33}\end{pmatrix},$$

the panning (yaw) rotation angle $\alpha$ is calculated as

$$\alpha=\operatorname{atan2}(r_{21},\,r_{11}).$$

If $\cos\beta=0$, the value of $\alpha$ needs to be determined according to the specific rotation conditions.
The pitch angle $\beta$ is calculated as

$$\beta=\operatorname{atan2}\!\left(-r_{31},\,\sqrt{r_{11}^{2}+r_{21}^{2}}\right),$$

and its value range is $[-\pi/2,\,\pi/2]$.
The head-deflection (roll) rotation angle $\gamma$ is calculated as

$$\gamma=\operatorname{atan2}(r_{32},\,r_{33}).$$

If $\cos\beta=0$, the value of $\gamma$ likewise needs to be determined according to the specific rotation conditions.
If the rotation order is different, the formulas for calculating the Euler angles also differ; the calculation order here is Z-Y-X. For an X-Y-Z order the calculation process and formulas change accordingly. In practical applications, Euler angles also suffer from the gimbal-lock problem: when the pitch angle is $\pm 90^{\circ}$, a degree of freedom is lost, which requires special attention during conversion and use.
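A sketch of the Z-Y-X extraction above, with a simple fallback at the gimbal-lock singularity (fixing the roll angle to zero there is an assumed convention; the text only notes that the angles must then be determined from the specific rotation conditions):

```python
import numpy as np

def euler_zyx_from_matrix(RM: np.ndarray) -> tuple[float, float, float]:
    """Extract (yaw alpha, pitch beta, roll gamma) from RM = Rz @ Ry @ Rx."""
    beta = np.arctan2(-RM[2, 0], np.hypot(RM[0, 0], RM[1, 0]))
    if np.isclose(np.cos(beta), 0.0):
        # Gimbal lock (pitch = +/-90 deg): yaw and roll are coupled,
        # so fix roll to zero and absorb the remainder into yaw (assumed).
        alpha = np.arctan2(-RM[0, 1], RM[1, 1])
        gamma = 0.0
    else:
        alpha = np.arctan2(RM[1, 0], RM[0, 0])
        gamma = np.arctan2(RM[2, 1], RM[2, 2])
    return alpha, beta, gamma
```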
S4, mapping the position of each predicted facial expression value within the range of the prediction's original data set into the target data set of the robot facial control steering engines, thereby obtaining the steering engine rotation angles. Here the minimum and maximum of the predicted facial expression values are 0 and 1 respectively, i.e. the limiting value range is $[0,\,1]$. Assuming the steering engine rotation limit range is 0–90°, a predicted expression value of 0.1 gives a rotation angle of 9°, a value of 0.5 gives 45°, and so on. This mapping can yield robot imitation data for micro-expressions of small dynamic amplitude, places low demands on picture quality, and makes it convenient to extract face images from the large existing stock of rich expressions, such as videos and film and television dramas, and map them accurately onto a physical robot to obtain large amounts of recognition and imitation data; and because the head pose is determined first, facial expressions can be extracted and expressed accurately.
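This step is plain linear interpolation between the two ranges; a minimal sketch with the [0, 1] source range and the assumed 0–90° steering engine limits as defaults:

```python
def expression_to_servo_angle(value: float,
                              src_min: float = 0.0, src_max: float = 1.0,
                              dst_min: float = 0.0, dst_max: float = 90.0) -> float:
    """Map a predicted expression value onto a steering engine's angle range."""
    t = (value - src_min) / (src_max - src_min)  # relative position in source range
    return dst_min + t * (dst_max - dst_min)

print(expression_to_servo_angle(0.1))  # ~9 degrees
print(expression_to_servo_angle(0.5))  # 45 degrees
```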
The Euler angle values and rotation angle values obtained in S3 and S4 respectively are filtered to avoid jitter caused by overly violent robot motion. The filtering is mean filtering: the angle values produced at times adjacent to the currently processed angle value are averaged, and the resulting mean replaces the currently processed value when adjusting the robot head pose or the rotation of the robot facial steering engines, specifically:

$$\bar\theta_t=\frac{1}{k}\sum_{j\in N_k(t)}\theta_j,$$

where the obtained $\bar\theta_t$ is the stable angle value after filtering, $\theta_t$ is the currently processed angle value, $k$ is the size of the neighborhood window $N_k(t)$ selected around $\theta_t$ and takes a value of 5–10, and the $\theta_j$ are the angle values obtained at adjacent times. Taking a typical video-processing rate of about 25 frames per second as an example, $k$ takes the value 5. Because values at subsequent times are included, a certain delay is introduced; it must not be excessive, otherwise human-computer interaction will feel noticeably sluggish.
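A minimal sketch of this mean filter as a trailing window over the last k values (the text's window also reaches into subsequent times, which is what introduces the delay; a purely trailing window is assumed here to keep the sketch causal):

```python
from collections import deque

class MeanAngleFilter:
    """Running mean over the last k angle values (k in 5-10 per the text)."""

    def __init__(self, k: int = 5):
        self.window = deque(maxlen=k)

    def update(self, angle: float) -> float:
        self.window.append(angle)
        return sum(self.window) / len(self.window)

# One filter per Euler angle and per steering engine channel (assumed usage):
yaw_filter = MeanAngleFilter(k=5)
smoothed = [yaw_filter.update(a) for a in (10.0, 12.0, 50.0, 11.0, 10.0)]
# The 50.0 spike is damped in `smoothed`, reducing jerky robot motion.
```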
S5, finally, controlling the robot head pose and the rotation of the robot facial steering engines according to the Euler angles and the rotation angles.
The invention also provides a facial expression recognition and imitation algorithm for a virtual character. In the Blender software (3D computer graphics software), 52 shape keys (morphological keys) and 3 head poses of the virtual character's face are selected and bound to the corresponding positions of the virtual character's head. The 3 head pose parameters obtained by recognizing the face image through the deep model MediaPipe are then mapped to the 3 head poses in Blender (this mapping is simple and is generally direct input), and the 52 facial expression values predicted from the facial key points are mapped to the 52 shape keys selected in Blender, realizing bound control of the 52 shape keys and 3 head poses of the virtual character's facial expression. The virtual character and the physical robot can thus participate in expression recognition and imitation in parallel, and the virtual character can be associated for synchronous expression recognition and imitation, joint interaction, or comparative reference observation tests.
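Inside Blender, the binding reduces to writing the 52 predicted values into shape keys and the three pose parameters into an Euler rotation on every update. The sketch below uses Blender's bpy API; the object name and the convention that the shape keys are named after the 52 expression values (e.g. "jawOpen") are illustrative assumptions:

```python
import bpy  # Blender's bundled Python API; run inside Blender

def drive_character(expr_values: dict, alpha: float, beta: float, gamma: float,
                    obj_name: str = "FaceMesh") -> None:
    """Write 52 expression values into shape keys and pose the head."""
    obj = bpy.data.objects[obj_name]
    key_blocks = obj.data.shape_keys.key_blocks
    for name, value in expr_values.items():
        if name in key_blocks:                 # e.g. "jawOpen", "mouthSmileLeft"
            key_blocks[name].value = value     # shape keys expect values in [0, 1]
    obj.rotation_mode = 'ZYX'                  # match the Z-Y-X order used above
    obj.rotation_euler = (gamma, beta, alpha)  # stored as (x, y, z) components
```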
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. A facial expression recognition and imitation algorithm for a robot, characterized by comprising the following steps:
S1, cyclically acquiring a face image, and cropping and pre-processing it;
S2, inputting the cropped face image into the deep model MediaPipe, identifying the facial key points in the face image through the model, obtaining head pose parameters, and predicting facial expression values from the facial key points;
S3, calculating the robot head pose: establishing three basic rotation matrices from the head pose parameters, taking their product as the rotation matrix, and then converting the rotation matrix into the three Euler angles of the robot head pose;
S4, mapping the position of each predicted facial expression value within the range of the prediction's original data set into the target data set of the robot facial control steering engines, thereby obtaining the steering engine rotation angles;
S5, finally, controlling the robot head pose and the rotation of the robot facial steering engines according to the Euler angles and the rotation angles.
2. The facial expression recognition and imitation algorithm for a robot according to claim 1, wherein the calculation of the robot head pose in S3 is specifically:
according to the deep model MediaPipe, the head pose parameters are α, β and γ respectively, and the three basic rotation matrices are established as
$$R_z(\alpha)=\begin{pmatrix}\cos\alpha&-\sin\alpha&0\\ \sin\alpha&\cos\alpha&0\\ 0&0&1\end{pmatrix},\quad R_y(\beta)=\begin{pmatrix}\cos\beta&0&\sin\beta\\ 0&1&0\\ -\sin\beta&0&\cos\beta\end{pmatrix},\quad R_x(\gamma)=\begin{pmatrix}1&0&0\\ 0&\cos\gamma&-\sin\gamma\\ 0&\sin\gamma&\cos\gamma\end{pmatrix};$$
then the product of the three basic rotation matrices is the rotation matrix $RM=R_z(\alpha)\,R_y(\beta)\,R_x(\gamma)$, and finally the rotation matrix is converted into the three Euler angles of the robot head pose.
3. The facial expression recognition and imitation algorithm for a robot according to claim 1 or 2, wherein in S1 the face images are cyclically acquired by an image acquisition device.
4. The facial expression recognition and imitation algorithm for a robot according to claim 1 or 2, wherein the Euler angle values and rotation angle values obtained in S3 and S4, respectively, are subjected to filtering processing.
5. The algorithm according to claim 4, wherein the filtering is mean filtering: the angle values produced at times adjacent to the currently processed angle value are averaged, and the resulting mean replaces the currently processed value when adjusting the robot head pose or the rotation of the robot facial steering engines, specifically:
$$\bar\theta_t=\frac{1}{k}\sum_{j\in N_k(t)}\theta_j,$$
wherein the obtained $\bar\theta_t$ is the stable angle value after filtering, $\theta_t$ is the currently processed angle value, $k$ is the size of the neighborhood window $N_k(t)$ selected around $\theta_t$, and the $\theta_j$ are the angle values obtained at adjacent times.
6. The facial expression recognition and imitation algorithm for a robot according to claim 5, wherein $k$ takes a value of 5–10.
7. The algorithm according to any one of claims 1, 2, 5 and 6, wherein 52 facial key points in the face image are identified through the deep model MediaPipe and 3 head pose parameters are obtained, and the number of facial expression values predicted from the facial key points is likewise 52.
8. A facial expression recognition and imitation algorithm for a virtual character, characterized in that 52 shape keys and 3 head poses corresponding to the character's face are selected in the Blender software and bound to the corresponding positions of the virtual character's head; 3 head pose parameters obtained by recognizing a face image through the deep model MediaPipe are then mapped to the 3 head poses in Blender, and the 52 facial expression values predicted from the facial key points are mapped to the 52 shape keys selected in Blender.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411953666.9A CN119380394B (en) 2024-12-27 2024-12-27 A facial expression recognition and imitation method for virtual characters or robots

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411953666.9A CN119380394B (en) 2024-12-27 2024-12-27 A facial expression recognition and imitation method for virtual characters or robots

Publications (2)

Publication Number Publication Date
CN119380394A 2025-01-28
CN119380394B (en) 2025-06-13

Family

ID=94336115

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411953666.9A Active CN119380394B (en) 2024-12-27 2024-12-27 A facial expression recognition and imitation method for virtual characters or robots

Country Status (1)

Country Link
CN (1) CN119380394B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020078440A1 (en) * 2018-10-18 2020-04-23 北京中科虹霸科技有限公司 Apparatus for collecting high-definition facial images and method for automatic pitch adjustment of camera gimbal
CN113146620A (en) * 2021-03-16 2021-07-23 南京工程学院 Binocular vision-based double-arm cooperative robot system and control method
CN114758399A (en) * 2022-04-28 2022-07-15 河南中原动力智能制造有限公司 Expression control method, device, equipment and storage medium of a bionic robot
CN115205926A (en) * 2022-06-15 2022-10-18 中国科学技术大学 A lightweight and robust face alignment method and system based on multi-task learning
CN115648203A (en) * 2022-09-15 2023-01-31 河海大学 A method for real-time mirroring behavior of robots based on lightweight neural network
CN115830284A (en) * 2022-11-14 2023-03-21 湖南快乐阳光互动娱乐传媒有限公司 Virtual human face driving method and device, storage medium and electronic equipment
CN117765157A (en) * 2023-03-30 2024-03-26 聚好看科技股份有限公司 Virtual character expression driving method and reconstruction equipment
WO2024113882A1 (en) * 2022-11-30 2024-06-06 上海哔哩哔哩科技有限公司 Facial expression control method and apparatus


Also Published As

Publication number Publication date
CN119380394B (en) 2025-06-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant