CN114078280B - Motion capture method, device, electronic device and storage medium

Info

Publication number: CN114078280B
Application number: CN202010801654.XA
Authority: CN (China)
Other versions: CN114078280A (Chinese)
Prior art keywords: whole body, motion, video data, estimation result, posture estimation
Inventor: 徐屹
Assignee: Beijing Dajia Internet Information Technology Co Ltd
Legal status: Active (granted)

Landscapes

  • Image Analysis
  • Processing Or Creating Images

Abstract

The present disclosure relates to a motion capture method, device, electronic device and storage medium. The method comprises: obtaining a whole-body posture estimation result; driving a three-dimensional model according to the whole-body posture estimation result; collecting target motion video data of detail parts in real time while the initial whole-body motion is played; identifying the target motion video data of the detail parts to obtain target posture estimation results of the detail parts; and jointly driving the three-dimensional model according to the target posture estimation results of the detail parts and the posture estimation results of the other parts to obtain the target whole-body motion of the three-dimensional model. On the one hand, the interactive operation flow lets the user split whole-body motion capture into multiple parts acquired over multiple passes and synchronize the motions through real-time preview, so high-quality whole-body motion information can be obtained; on the other hand, capture is purely vision-based, so high-precision motion acquisition is achieved at relatively low cost.

Description

Motion capture method, motion capture device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a motion capture method, a motion capture device, electronic equipment and a storage medium.
Background
Motion capture means capturing and collecting human body motions, gestures and expressions; it fully presents an actor's performance and has important applications in CG (Computer Graphics) animation and film production.
Motion capture can be implemented in several ways: optical motion capture, inertial motion capture, and markerless visual motion capture. Optical motion capture is a scheme with relatively high capture quality in the prior art and a common means in 3D (three-dimensional) animation and film production. Inertial motion capture is widely used in semi-professional content production such as virtual idols. Both optical and inertial motion capture usually require the performer to wear professional clothing and use professional acquisition equipment, so they suffer from high cost. Markerless visual motion capture removes this cost barrier: the performer can capture motion with an ordinary mobile phone or a computer with an ordinary camera. In the related art, however, markerless visual motion capture cannot obtain clear whole-body motion, because the fingers and face occupy too few pixels of the picture and appear too blurred.
Disclosure of Invention
The present disclosure provides a motion capture method, a motion capture device, an electronic apparatus, and a storage medium, so as to at least solve the problem that motion capture in the related art cannot obtain clear whole body motion. The technical scheme of the present disclosure is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided a motion capture method, including:
acquiring a whole-body posture estimation result, where the whole-body posture estimation result includes a posture estimation result of a detail part and posture estimation results of the other parts except the detail part, the detail part being a part of the whole body;
driving the three-dimensional model according to the whole-body posture estimation result to obtain an initial whole-body action of the three-dimensional model;
collecting target motion video data of the detail part in real time while the initial whole-body action is played;
identifying the target motion video data of the detail part to obtain a target posture estimation result of the detail part; and
jointly driving the three-dimensional model according to the target posture estimation result of the detail part and the posture estimation results of the other parts to obtain a target whole-body action of the three-dimensional model.
In one embodiment, there are a plurality of detail parts, and collecting the target motion video data of the detail parts in real time while the initial whole-body action is played includes:
collecting the target motion video data of each detail part separately while the initial whole-body action is played;
in this embodiment, identifying the target motion video data of the detail parts to obtain the target posture estimation results of the detail parts includes:
identifying the target motion video data of each detail part to obtain a target posture estimation result of each detail part.
In one embodiment, jointly driving the three-dimensional model according to the target posture estimation results of the detail parts and the posture estimation results of the other parts to obtain the target whole-body action of the three-dimensional model includes:
while the target posture estimation result of one detail part is being obtained, jointly driving the three-dimensional model in real time according to the target posture estimation results obtained so far and the posture estimation results of the other parts, until the target posture estimation results of all the detail parts are obtained; and
jointly driving the three-dimensional model according to the target posture estimation results of all the detail parts and the posture estimation results of the other parts to obtain the target whole-body action of the three-dimensional model.
In one embodiment, obtaining the whole-body posture estimation result includes:
acquiring pre-stored initial motion video data of the whole body; and
identifying the pre-stored initial motion video data of the whole body to obtain the whole-body posture estimation result.
In one embodiment, obtaining the whole-body posture estimation result includes:
collecting initial motion video data of the whole body in real time; and
identifying the initial motion video data of the whole body collected in real time to obtain the whole-body posture estimation result.
In one embodiment, after the target whole-body action of the three-dimensional model is obtained, the method further comprises:
when a re-acquisition instruction is received and the part category information included in the re-acquisition instruction is the whole body, collecting updated motion video data of the whole body, where the initial motion video data of the whole body and the updated motion video data of the whole body are obtained from the same set of whole-body performance actions;
identifying the updated motion video data of the whole body to obtain an updated whole-body posture estimation result; and
driving the three-dimensional model according to the updated whole-body posture estimation result to obtain an updated initial whole-body action of the three-dimensional model.
In one embodiment, after the target whole-body action of the three-dimensional model is obtained, the method further comprises:
when a re-acquisition instruction is received and the part category information included in the re-acquisition instruction is a detail part, playing the initial whole-body action;
collecting updated motion video data of the detail part in real time while the initial whole-body action is played, where the updated motion video data of the detail part and the target motion video data of the detail part are obtained from the same set of detail-part performance actions;
identifying the updated motion video data of the detail part to obtain an updated posture estimation result of the detail part; and
jointly driving the three-dimensional model according to the updated posture estimation result of the detail part and the posture estimation results of the other parts to obtain an updated whole-body action of the three-dimensional model.
In one embodiment, the detail region includes at least one of a hand and a face.
According to a second aspect of embodiments of the present disclosure, there is provided a motion capture device, comprising:
an acquisition module configured to acquire a whole-body posture estimation result, the whole-body posture estimation result including a posture estimation result of a detail part and posture estimation results of the other parts except the detail part, the detail part being a part of the whole body;
a driving module configured to drive the three-dimensional model according to the whole-body posture estimation result to obtain an initial whole-body action of the three-dimensional model;
a collection module configured to collect target motion video data of the detail part in real time while the initial whole-body action is played;
a detail part identification module configured to identify the target motion video data of the detail part to obtain a target posture estimation result of the detail part;
the driving module being further configured to jointly drive the three-dimensional model according to the target posture estimation result of the detail part and the posture estimation results of the other parts to obtain a target whole-body action of the three-dimensional model.
In one embodiment, there are a plurality of detail parts, and the collection module is configured to collect the target motion video data of each detail part separately while the initial whole-body action is played;
the detail part identification module is configured to identify the target motion video data of each detail part to obtain a target posture estimation result of each detail part.
In one embodiment, the driving module is configured to, while the target posture estimation result of one detail part is being obtained, jointly drive the three-dimensional model in real time according to the target posture estimation results obtained so far and the posture estimation results of the other parts, until the target posture estimation results of all the detail parts are obtained, and then jointly drive the three-dimensional model according to the target posture estimation results of all the detail parts and the posture estimation results of the other parts to obtain the target whole-body action of the three-dimensional model.
In one embodiment, the acquisition module is further configured to acquire pre-stored initial motion video data of the whole body;
the device further comprises a whole-body posture recognition module configured to identify the pre-stored initial motion video data of the whole body to obtain the whole-body posture estimation result.
In one embodiment, the collection module is further configured to collect initial motion video data of the whole body in real time;
the device further comprises a whole-body posture recognition module configured to identify the initial motion video data of the whole body collected in real time to obtain the whole-body posture estimation result.
In one embodiment, the apparatus further comprises a receiving module configured to receive a re-acquisition instruction;
the collection module is further configured to collect updated motion video data of the whole body when the part category information included in the re-acquisition instruction is the whole body, where the initial motion video data of the whole body and the updated motion video data of the whole body are obtained from the same set of whole-body performance actions;
the whole-body posture recognition module is further configured to identify the updated motion video data of the whole body to obtain an updated whole-body posture estimation result;
the driving module is further configured to drive the three-dimensional model according to the updated whole-body posture estimation result to obtain an updated initial whole-body action of the three-dimensional model.
In one embodiment, the apparatus further comprises a receiving module configured to receive a re-acquisition instruction;
a playing module configured to play the initial whole-body action when the part category information included in the re-acquisition instruction is a detail part;
the collection module is further configured to collect updated motion video data of the detail part in real time while the initial whole-body action is played, where the updated motion video data of the detail part and the target motion video data of the detail part are obtained from the same set of detail-part performance actions;
the detail part identification module is further configured to identify the updated motion video data of the detail part to obtain an updated posture estimation result of the detail part;
the driving module is further configured to jointly drive the three-dimensional model according to the updated posture estimation result of the detail part and the posture estimation results of the other parts to obtain an updated whole-body action of the three-dimensional model.
In one embodiment, the detail region includes at least one of a hand and a face.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
A processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute instructions to implement the motion capture method described in any of the embodiments of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the motion capture method described in any one of the embodiments of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program stored in a readable storage medium, from which at least one processor of a device reads and executes the computer program, causing the device to perform the motion capture method as described in any one of the embodiments of the first aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
The method obtains a whole-body posture estimation result, where the whole-body posture estimation result includes a posture estimation result of a detail part and posture estimation results of the other parts except the detail part, the detail part being a part of the whole body; drives a three-dimensional model according to the whole-body posture estimation result to obtain an initial whole-body action of the three-dimensional model; collects target motion video data of the detail part in real time while the initial whole-body action is played; identifies the target motion video data of the detail part to obtain a target posture estimation result of the detail part; and jointly drives the three-dimensional model according to the target posture estimation result of the detail part and the posture estimation results of the other parts to obtain a target whole-body action of the three-dimensional model. According to the scheme of the disclosure, on the one hand, the interactive operation flow lets the user split whole-body motion capture into multiple parts acquired over multiple passes and synchronize the motions through real-time preview, so high-quality whole-body motion information can be obtained; on the other hand, motion capture is purely vision-based, so high-precision motion acquisition is achieved at relatively low cost.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is an application environment diagram illustrating a motion capture method according to an example embodiment.
FIG. 2 is a flowchart illustrating a method of motion capture, according to an example embodiment.
FIG. 3 is a flowchart illustrating a procedure for re-acquiring whole-body motion according to an exemplary embodiment.
FIG. 4 is a flowchart illustrating a procedure for re-acquiring detail parts according to an exemplary embodiment.
FIG. 5 is a flowchart illustrating a method of motion capture, according to an example embodiment.
FIG. 6 is a block diagram of a motion capture device, according to an example embodiment.
FIG. 7 is an internal structure diagram of an electronic device according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The motion capture method provided by the disclosure can be applied to an application environment as shown in fig. 1. The application environment includes a terminal 110 and a video capture device 120. The video capture device 120 may be built into the terminal 110 or may be a separate device, which is not limited herein. The terminal 110 is deployed with a motion capture system that performs motion capture based on a pure vision method; it processes the motion video data acquired by the video capture device 120 in real time, obtains the posture estimation result corresponding to the motion video data, and drives a predefined three-dimensional model according to the obtained posture estimation result. The terminal 110 further includes a display screen for playing the three-dimensional model driven by the posture estimation result in real time, so that the user can preview the action effect of the three-dimensional model in real time. Specifically, the terminal 110 obtains a whole-body posture estimation result, where the whole-body posture estimation result includes a posture estimation result of a detail part and posture estimation results of the other parts except the detail part, the detail part being a part of the whole body. The terminal 110 drives the three-dimensional model according to the whole-body posture estimation result to obtain an initial whole-body action of the three-dimensional model; the video capture device 120 collects target motion video data of the detail part in real time while the initial whole-body action is played; the target motion video data of the detail part is identified to obtain a target posture estimation result of the detail part; and the three-dimensional model is jointly driven according to the target posture estimation result of the detail part and the posture estimation results of the other parts to obtain a target whole-body action of the three-dimensional model. The terminal 110 plays the target whole-body action of the three-dimensional model so that the user can preview the motion synchronization effect of the detail part and the other parts. The terminal 110 may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, or a portable wearable device. The video capture device 120 may be, but is not limited to, at least one of a video camera, a still camera, and a camera mounted on an electronic device.
Fig. 2 is a flowchart illustrating a motion capture method according to an exemplary embodiment, as shown in fig. 2, for use in the terminal 110, including the following steps.
In step S210, a whole body posture estimation result is acquired, the whole body posture estimation result including a posture estimation result of a detail part, which is a part of the whole body, and posture estimation results of other parts than the detail part.
The detail part may refer to a part of the human body that is difficult to identify, such as a hand, the facial features, or a wrist. Motion capture is based on a set of whole-body performance actions, which include the actions of the detail parts and the actions of the other parts except the detail parts. Specifically, the whole-body posture estimation result includes, but is not limited to, the rotation angles of the whole-body joints. The whole-body posture estimation result can be obtained by using a trained first neural network model to identify the initial motion video data of the whole body. The initial motion video data of the whole body can be collected in advance by the video capture device, or collected in real time when motion capture is needed. After the initial motion video data of the whole body is obtained, it is identified by the trained first neural network model to obtain the whole-body posture estimation result. The first neural network model may be any model that can be used for human body pose estimation, for example DensePose (a real-time human body pose estimation model) or OpenPose (a keypoint detection model).
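To make this concrete, the following minimal Python sketch shows one way the frame-by-frame recognition of step S210 could be organized. `PoseModel` and its `predict` method are hypothetical stand-ins for the trained first neural network model; the patent names DensePose and OpenPose as candidates but specifies no API, so the interface here is an assumption.

```python
# Minimal sketch of step S210: estimate whole-body pose from the initial
# motion video. PoseModel is a hypothetical stand-in for the trained
# "first neural network model"; its interface is an assumption.
import cv2  # OpenCV, for reading the initial motion video


class PoseModel:
    """Hypothetical wrapper around a trained whole-body pose network."""

    def predict(self, frame) -> dict[str, float]:
        # A real model would run inference here and return the rotation
        # angle of every whole-body joint for this frame.
        return {"spine": 0.0, "left_elbow": 0.0}  # placeholder output


def estimate_whole_body_pose(video_path: str, model: PoseModel) -> list[dict[str, float]]:
    """Return one {joint_name: rotation_angle} dict per video frame."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        frames.append(model.predict(frame))
    cap.release()
    return frames
```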
In step S220, the three-dimensional model is driven according to the posture estimation result of the whole body, and the initial whole body motion of the three-dimensional model is obtained.
Specifically, after acquiring the whole-body posture estimation result, the terminal drives the three-dimensional model with it to obtain the initial whole-body action of the three-dimensional model, and plays the initial whole-body action of the three-dimensional model on the screen. The three-dimensional model can be a preselected three-dimensional virtual image, such as a virtual idol live stream, a virtual host, or a body magic special effect in short-video application scenarios; the user can select a suitable three-dimensional model according to actual requirements.
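As an illustration of what "driving the three-dimensional model" amounts to at the data level, here is a hedged Python sketch: per-frame joint rotations are applied to a rigged avatar and playback is paced so the user can preview it. The `Avatar` class is an assumption; a real system would use the skeleton API of its rendering engine.

```python
import time


class Avatar:
    """Hypothetical rigged three-dimensional model."""

    def __init__(self) -> None:
        self.joints: dict[str, float] = {}

    def set_joint_rotation(self, joint: str, angle: float) -> None:
        self.joints[joint] = angle  # a real engine would also update the mesh


def play_motion(avatar: Avatar, pose_frames: list[dict[str, float]], fps: float = 30.0) -> None:
    """Drive the avatar with one pose estimate per frame, paced for preview."""
    for pose in pose_frames:
        for joint, angle in pose.items():
            avatar.set_joint_rotation(joint, angle)
        time.sleep(1.0 / fps)  # keep playback at the capture frame rate
```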
In step S230, the target motion video data of the detail part is collected in real time while the initial whole-body action of the three-dimensional model is played.
In step S240, the target motion video data of the detail part is identified, and a target posture estimation result of the detail part is obtained.
In step S250, the three-dimensional model is jointly driven according to the target posture estimation result of the detail part and the posture estimation results of the other parts to obtain the target whole-body action of the three-dimensional model.
The terminal is provided with a trained second neural network model, which identifies the collected target motion video data of the detail part to obtain the target posture estimation result of the detail part. The second neural network model may depend on the type of detail part: when the detail part is a hand, it may be any gesture capturing algorithm that can be used for hand pose estimation, for example Hand Pose Estimation (a gesture pose estimation algorithm); when the detail part is a face, it may be any expression recognition algorithm, for example Deep Facial Expression Recognition (a deep expression recognition algorithm).
Specifically, while the initial whole-body action of the three-dimensional model is played, the user performs the detail-part actions in sync with the playing progress of the initial whole-body action. The terminal collects the target motion video data of the detail part in real time and inputs it into the pre-deployed second neural network model. The terminal obtains the target posture estimation result of the detail part currently output by the second neural network model, and jointly drives the three-dimensional model according to this result and the posture estimation results of the other parts currently being played, obtaining the current target whole-body action of the three-dimensional model. The terminal can also play the current target whole-body action of the three-dimensional model on the screen, so that the user can preview the motion synchronization effect of the detail part and the other parts in real time.
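The per-frame combination of the detail-part result with the whole-body result can be pictured as a key-wise merge, as in this sketch; the joint-naming scheme is an assumption for illustration, not taken from the patent.

```python
# Illustrative merge for step S250: the freshly captured detail-part
# estimate replaces the (blurry) detail-part joints in the whole-body
# estimate, while all other joints are kept as-is.
DETAIL_PREFIXES = ("left_hand/", "right_hand/", "face/")  # hypothetical naming scheme


def merge_frame(whole_body: dict[str, float], detail: dict[str, float]) -> dict[str, float]:
    merged = {
        joint: angle
        for joint, angle in whole_body.items()
        if not joint.startswith(DETAIL_PREFIXES)  # drop the low-quality detail joints
    }
    merged.update(detail)  # substitute the separately captured ones
    return merged
```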
In this motion capture method, on the one hand, the interactive operation flow lets the user split whole-body motion capture into multiple parts acquired over multiple passes, with motion synchronization achieved through real-time preview, so high-quality whole-body motion information can be obtained; on the other hand, motion capture is purely vision-based, so high-precision motion acquisition is achieved at low cost.
In an exemplary embodiment, there are a plurality of detail parts, and in step S230, collecting the target motion video data of the detail parts in real time while the initial whole-body action is played includes collecting the target motion video data of each detail part separately while the initial whole-body action is played.
Specifically, when there are a plurality of detail parts, the terminal can drive the three-dimensional model according to the acquired whole-body posture estimation result, obtain the initial whole-body action of the three-dimensional model, and play it. While the initial whole-body action is played, the user performs the first detail part's actions in sync with the playing progress. The video capture device is aimed at the first detail part and captures it separately, yielding the target motion video data of the first detail part. After the target motion video data of the first detail part has been collected, the terminal replays the initial whole-body action of the three-dimensional model, and during the replay the target motion video data of the second detail part is collected separately, following the same procedure as for the first detail part. The above process is repeated until the target motion video data of all the detail parts has been collected.
In this embodiment, in step S240, identifying the target motion video data of the detail parts to obtain the target posture estimation results of the detail parts includes identifying the target motion video data of each detail part to obtain the target posture estimation result of each detail part.
Specifically, the target posture estimation result of each detail part can be obtained in real time, i.e., the target motion video data at the current moment is identified by the second neural network model while the data of each detail part is still being collected, or offline, i.e., the target motion video data of each detail part is identified by the second neural network model in full after collection completes; this is not limited here.
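To make the two recognition modes concrete, here is a minimal Python sketch; `recognize` is a hypothetical stand-in for the second neural network model, since the patent does not fix an interface.

```python
def recognize(frame) -> dict[str, float]:
    """Placeholder for the second neural network model's per-frame output."""
    return {}


def recognize_realtime(frame_stream):
    # Real-time mode: each frame is recognized as soon as it is captured,
    # so results are available while acquisition is still in progress.
    for frame in frame_stream:
        yield recognize(frame)


def recognize_offline(frames: list) -> list[dict[str, float]]:
    # Offline mode: the whole clip is recognized after capture completes.
    return [recognize(frame) for frame in frames]
```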
In this embodiment, by separately collecting the target motion video data of each detail part, a clear motion of each detail part can be obtained, thereby improving the quality of motion capture.
In an exemplary embodiment, in step S250, jointly driving the three-dimensional model according to the target posture estimation results of the detail parts and the posture estimation results of the other parts to obtain the target whole-body action of the three-dimensional model includes: while the target posture estimation result of each detail part is being obtained, jointly driving the three-dimensional model in real time according to the target posture estimation results obtained so far and the posture estimation results of the other parts, until the target posture estimation results of all the detail parts are obtained; and then jointly driving the three-dimensional model according to the target posture estimation results of all the detail parts and the posture estimation results of the other parts to obtain the target whole-body action of the three-dimensional model.
Specifically, when there are a plurality of detail parts, as the terminal identifies the target posture estimation result of one detail part in real time or offline, it jointly drives the three-dimensional model according to the target posture estimation result of that detail part at the current moment and the posture estimation results of the other parts being played at the current moment, obtaining the whole-body action of the three-dimensional model at the current moment. The terminal can play this whole-body action on the screen, so that the user can preview the motion synchronization effect of the detail part and the other parts in real time. Once the target posture estimation result of the last detail part has been identified, the three-dimensional model is jointly driven according to the target posture estimation results of all the detail parts and the posture estimation results of the other parts, obtaining the target whole-body action of the three-dimensional model.
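The incremental drive loop amounts to folding each newly captured part into a running result set, as in the following sketch; all function and parameter names are illustrative assumptions.

```python
def capture_all_parts(initial_body_frames, parts, capture_part, drive_model):
    """Capture each detail part in turn, previewing after every pass.

    `parts` might be ["left_hand", "right_hand", "face"]; `capture_part`
    records and recognizes one part while the initial whole-body action
    is replayed; `drive_model` drives and plays the three-dimensional
    model from the body estimate plus all parts captured so far.
    """
    captured: dict[str, list] = {}
    for part in parts:
        captured[part] = capture_part(part, initial_body_frames)
        drive_model(initial_body_frames, captured)  # real-time preview of sync
    return captured  # once all parts are in, this yields the target motion
```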
In this embodiment, the target motion video data of each detail part is collected separately, and each detail part motion and other part motions are synchronized in an interactive manner, so that a user can know the motion synchronization effects of different parts in real time.
In an exemplary embodiment, in step S210, obtaining the whole-body posture estimation result includes obtaining pre-stored initial motion video data of the whole body and identifying it to obtain the whole-body posture estimation result.
Specifically, the whole-body posture estimation result can be obtained through offline recognition. A set of the user's performance actions is shot in advance with the video capture device, yielding the initial motion video data of the whole body, which is stored on the terminal. When motion capture is needed, the terminal invokes the pre-deployed first neural network model to identify the initial motion video data of the whole body and obtain the whole-body posture estimation result.
In this embodiment, collecting and identifying the initial motion video data of the whole body offline reduces the computational load on the device during motion capture; in addition, the user can collect the initial motion video data of the whole body in advance, which improves flexibility of use.
In an exemplary embodiment, in step S210, obtaining the whole-body posture estimation result includes collecting initial motion video data of the whole body in real time and identifying it to obtain the whole-body posture estimation result.
Specifically, the whole-body posture estimation result can be obtained through real-time recognition. When motion capture is needed, the video capture device shoots the user's performance actions in real time, yielding the initial motion video data. The terminal invokes the pre-deployed first neural network model to identify the initial motion video data of the whole body collected at the current moment, obtains the whole-body posture estimation result at the current moment, and drives the three-dimensional model according to it to obtain the initial whole-body action of the three-dimensional model. In this embodiment, collecting and identifying the initial motion video data of the whole body in real time lets the user perform motion capture at any time, which improves convenience of use.
In an exemplary embodiment, as shown in fig. 3, after obtaining the target whole-body motion of the three-dimensional model, the method further includes:
In step S310, when a re-acquisition instruction is received and the part category information included in the re-acquisition instruction is the whole body, updated motion video data of the whole body is collected, where the initial motion video data of the whole body and the updated motion video data of the whole body are obtained from the same set of whole-body performance actions.
In step S320, the updated motion video data of the whole body is recognized, and the updated posture estimation result of the whole body is obtained.
In step S330, the three-dimensional model is driven according to the updated posture estimation result of the whole body, and the updated initial whole body motion of the three-dimensional model is obtained.
Specifically, while the initial whole-body action or the target whole-body action is played, if the user judges from the real-time preview of the three-dimensional model that the final action effect does not meet the requirement, the user can select, via the terminal, the part category to be re-collected, triggering a re-acquisition instruction. When the part category information included in the re-acquisition instruction is the whole body, the user performs the same set of whole-body actions in front of the video capture device, and the terminal obtains the updated motion video data of the whole body re-collected by the video capture device. The terminal can identify the updated whole-body posture estimation result in real time or offline, and while this result is being obtained, the three-dimensional model can be driven according to it to obtain the updated initial whole-body action of the three-dimensional model.
In this embodiment, when the user determines that the obtained whole body motion does not meet the requirement, the user may trigger the re-acquisition instruction to re-acquire updated motion video data of the whole body, thereby improving the operability of motion capture.
In an exemplary embodiment, as shown in fig. 4, after obtaining the target whole-body action of the three-dimensional model, the method further includes:
In step S410, when the re-acquisition instruction is received and the location category information included in the re-acquisition instruction is a detailed location, the initial whole body action is played.
In step S420, updated motion video data of the detail part is collected in real time while the initial whole-body action is played, where the updated motion video data of the detail part and the target motion video data of the detail part are obtained from the same set of detail-part performance actions.
In step S430, the updated motion video data of the detail part is identified, and an updated posture estimation result of the detail part is obtained.
In step S440, the three-dimensional model is driven together according to the updated posture estimation result of the detail part and the posture estimation result of the other parts, so as to obtain the updated whole body motion of the three-dimensional model.
Specifically, while the target whole-body action is played, if the user judges from the real-time preview of the three-dimensional model that the final motion synchronization effect does not meet the requirement, the user can select, via the terminal, the part category to be re-collected, triggering a re-acquisition instruction. When the part category information included in the re-acquisition instruction is a detail part, the terminal responds by replaying the initial whole-body action of the three-dimensional model. While the initial whole-body action is played, the user performs the same set of detail-part actions in sync with the playing progress; that is, the video capture device is aimed at the detail part to collect its updated motion video data. The terminal invokes the second neural network model to identify the updated motion video data of the detail part and obtain the updated posture estimation result of the detail part, then jointly drives the three-dimensional model according to the currently output updated posture estimation result of the detail part and the posture estimation results of the other parts currently being played, obtaining the updated whole-body action of the three-dimensional model.
In this embodiment, when the user determines that the motion synchronization effect of the detail part and other parts does not meet the requirement, the user may trigger the re-acquisition instruction to re-acquire the updated motion video data of the detail part, thereby improving the operability of motion capture.
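Both re-acquisition branches reduce to a dispatch on the part-category field carried by the instruction, as this sketch illustrates; the enum values and callback names are assumptions, not taken from the patent.

```python
from enum import Enum


class PartCategory(Enum):
    WHOLE_BODY = "whole_body"
    DETAIL = "detail"


def handle_reacquisition(category: PartCategory, recapture_body, recapture_detail) -> None:
    if category is PartCategory.WHOLE_BODY:
        # Steps S310-S330: the user repeats the same whole-body performance
        # and the initial whole-body action is rebuilt from the new video.
        recapture_body()
    else:
        # Steps S410-S440: the initial whole-body action is replayed and the
        # user repeats the same detail-part performance in sync with it.
        recapture_detail()
```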
FIG. 5 is a flowchart illustrating a specific motion capture method according to an example embodiment. In the related art, when the whole human body is captured with markerless visual motion capture, the hands and face occupy too few pixels of the picture and appear too blurred, and the hands are often occluded by the body, so clear whole-body motion is difficult to obtain. Thus, in the present embodiment, the detail parts include the hands and the face. The hands include a left hand and a right hand. At least one neural network model capable of recognizing the whole body, the hands and the face is pre-deployed on the terminal. As shown in fig. 5, the motion capture method includes the following steps.
In step S501, initial motion video data of the user's whole body is acquired by the video capture device, e.g., a webcam (WEBCAM) camera, so as to ensure the integrity and consistency of the performance.
In step S502, the body posture capturing algorithm is used to identify the initial motion video data of the whole body, and a posture estimation result of the whole body is obtained. The body posture capturing algorithm may be a real-time algorithm or an offline algorithm, and is not limited herein.
The whole-body posture estimation result includes, but is not limited to, the rotation angle of each skeletal joint. Since the hands and face occupy a small pixel area in the whole picture, their information may not be usable by the body posture capturing algorithm.
In step S503, the three-dimensional model is driven according to the posture estimation result of the whole body, and the initial whole body motion of the three-dimensional model is obtained.
In step S504, target motion video data of the first hand is collected while the initial whole-body action of the three-dimensional model is played. Specifically, the WEBCAM camera can be aimed at the user's first hand for collection.
In step S505, the gesture capturing algorithm is used to identify the target motion video data of the first hand and obtain the target posture estimation result of the first hand. The gesture capturing algorithm may be any algorithm that can be used for hand pose estimation, for example Hand Pose Estimation (a gesture pose estimation algorithm). The target posture estimation result of the first hand includes, but is not limited to, the rotation angle of each finger joint of the first hand. The gesture capturing algorithm here is a real-time algorithm, i.e., the target posture estimation result of the first hand is identified in real time while the target motion video data of the first hand is being collected.
In step S506, the three-dimensional model is jointly driven according to the target posture estimation result of the first hand and the posture estimation results of the other parts, obtaining the first hand action and the other part actions of the three-dimensional model. Further, the terminal plays the first hand action and the other part actions of the three-dimensional model in real time on the screen, so that the user can preview the motion synchronization effect of the first hand and the other parts in real time, ensuring that the first hand stays synchronized with the other parts.
In step S507, target motion video data of the second hand is collected while the first hand action and the other part actions of the three-dimensional model are played. Specifically, the WEBCAM camera can be aimed at the user's second hand for collection.
In step S508, the gesture capturing algorithm is used to identify the target motion video data of the second hand and obtain the target posture estimation result of the second hand, which includes, but is not limited to, the rotation angle of each finger joint of the second hand. The gesture capturing algorithm here is a real-time algorithm, i.e., the target posture estimation result of the second hand is identified in real time while the target motion video data of the second hand is being collected.
In step S509, the three-dimensional model is jointly driven according to the target posture estimation result of the first hand, the target posture estimation result of the second hand, and the posture estimation results of the other parts, obtaining the first hand action, the second hand action and the other part actions of the three-dimensional model. Further, the terminal plays these actions in real time on the screen, so that the user can preview the motion synchronization effect of the first hand, the second hand and the other parts in real time, ensuring that both hands stay synchronized with the other parts.
In steps S504-S509, either the first hand is the left hand and the second hand is the right hand, or the first hand is the right hand and the second hand is the left hand.
In step S510, target motion video data of the face is collected while the first hand action, the second hand action and the other part actions of the three-dimensional model are played. Specifically, the WEBCAM camera can be aimed at the user's face for collection.
In step S511, the expression recognition algorithm is used to identify the target motion video data of the face and obtain the target posture estimation result of the face. The expression recognition algorithm here is a real-time algorithm, i.e., the target posture estimation result of the face is identified in real time while the target motion video data of the face is being collected. The expression recognition algorithm may be any algorithm that can be used for expression recognition, for example Deep Facial Expression Recognition (a deep expression recognition algorithm).
In step S512, the three-dimensional model is jointly driven according to the target posture estimation result of the first hand, the target posture estimation result of the second hand, the target posture estimation result of the face, and the posture estimation results of the other parts, obtaining the target whole-body action of the three-dimensional model. Further, the terminal plays the target whole-body action of the three-dimensional model in real time on the screen, so that the user can preview the synchronization effect of the whole-body action in real time and whole-body synchronization is ensured.
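At the data level, step S512 can be viewed as a frame-wise union of the four estimates, as in this sketch; the comments name illustrative key spaces (skeletal joints, finger joints, expression parameters), which are assumptions about how the results are keyed.

```python
def assemble_target_motion(body, left_hand, right_hand, face):
    """Merge per-frame estimates of the body, both hands, and the face."""
    target = []
    for b, lh, rh, f in zip(body, left_hand, right_hand, face):
        frame = dict(b)      # rotation angles of the skeletal joints
        frame.update(lh)     # rotation angles of the left-hand finger joints
        frame.update(rh)     # rotation angles of the right-hand finger joints
        frame.update(f)      # facial expression parameters
        target.append(frame)
    return target
```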
The above order of collecting the first hand, the second hand and the face is only an example in the present embodiment. In actual application, the user can freely adjust the order in which the actions of the detail parts are collected according to need.
Further, in implementation, if the user determines from the real-time preview of the three-dimensional model's actions that the motion capture effect does not meet the requirement, the user can select, via the terminal, the part category to be re-collected and trigger a re-acquisition instruction. The terminal responds to the re-acquisition instruction by re-collecting the updated motion video data corresponding to the part category information carried in the instruction.
It should be understood that, although the steps in the above flowcharts are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily executed sequentially but may be performed in turn or in alternation with at least some of the sub-steps or stages of other steps.
FIG. 6 is a block diagram of a motion capture device 600 according to an example embodiment. Referring to fig. 6, the apparatus includes an acquisition module 601, a driving module 602, a collection module 603, and a detail part identification module 604.
an acquisition module 601 configured to acquire a whole-body posture estimation result, the whole-body posture estimation result including a posture estimation result of a detail part and posture estimation results of the other parts except the detail part, the detail part being a part of the whole body;
a driving module 602 configured to drive the three-dimensional model according to the whole-body posture estimation result to obtain an initial whole-body action of the three-dimensional model;
a collection module 603 configured to collect target motion video data of the detail part in real time while the initial whole-body action is played;
a detail part identification module 604 configured to identify the target motion video data of the detail part to obtain a target posture estimation result of the detail part;
the driving module 602 being further configured to jointly drive the three-dimensional model according to the target posture estimation result of the detail part and the posture estimation results of the other parts to obtain a target whole-body action of the three-dimensional model.
In an exemplary embodiment, there are a plurality of detail parts; the collection module 603 is configured to collect the target motion video data of each detail part separately while the initial whole-body action is played, and the detail part identification module 604 is configured to identify the target motion video data of each detail part to obtain a target posture estimation result of each detail part.
In an exemplary embodiment, the driving module 602 is configured to, while the target posture estimation result of one detail part is being obtained, jointly drive the three-dimensional model in real time according to the target posture estimation results obtained so far and the posture estimation results of the other parts, until the target posture estimation results of all the detail parts are obtained, and then jointly drive the three-dimensional model according to the target posture estimation results of all the detail parts and the posture estimation results of the other parts to obtain the target whole-body action of the three-dimensional model.
In an exemplary embodiment, the acquisition module 601 is further configured to acquire pre-stored initial motion video data of the whole body, and the apparatus 600 further includes a whole-body posture recognition module configured to identify the pre-stored initial motion video data of the whole body to obtain the whole-body posture estimation result.
In an exemplary embodiment, the collection module 603 is further configured to collect initial motion video data of the whole body in real time, and the apparatus 600 further includes a whole-body posture recognition module configured to identify the initial motion video data of the whole body collected in real time to obtain the whole-body posture estimation result.
In an exemplary embodiment, the apparatus 600 further includes a receiving module configured to receive a re-acquisition instruction; the collection module 603 is further configured to collect updated motion video data of the whole body when the part category information included in the re-acquisition instruction is the whole body, where the initial motion video data of the whole body and the updated motion video data of the whole body are obtained from the same set of whole-body performance actions; the whole-body posture recognition module is further configured to identify the updated motion video data of the whole body to obtain an updated whole-body posture estimation result; and the driving module 602 is further configured to drive the three-dimensional model according to the updated whole-body posture estimation result to obtain an updated initial whole-body action of the three-dimensional model.
In an exemplary embodiment, the apparatus 600 further includes a receiving module configured to receive a re-acquisition instruction and a playing module configured to play the initial whole-body action when the part category information included in the re-acquisition instruction is a detail part; the collection module 603 is further configured to collect updated motion video data of the detail part in real time while the initial whole-body action is played, where the updated motion video data of the detail part and the target motion video data of the detail part are obtained from the same set of detail-part performance actions; the detail part identification module 604 is further configured to identify the updated motion video data of the detail part to obtain an updated posture estimation result of the detail part; and the driving module 602 is further configured to jointly drive the three-dimensional model according to the updated posture estimation result of the detail part and the posture estimation results of the other parts to obtain an updated whole-body action of the three-dimensional model.
In an exemplary embodiment, the detail region includes at least one of a hand and a face.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method and will not be repeated here.
FIG. 7 is a block diagram illustrating an apparatus 700 for motion capture, according to an example embodiment. For example, device 700 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to FIG. 7, device 700 can include one or more of a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
The processing component 702 generally controls overall operation of the device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 702 may include one or more processors 720 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 702 can include one or more modules that facilitate interaction between the processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
Memory 704 is configured to store various types of data to support operations at device 700. Examples of such data include instructions for any application or method operating on device 700, contact data, phonebook data, messages, pictures, video, and the like. The memory 704 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power supply component 706 provides power to the various components of the device 700. The power supply component 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 700.
The multimedia component 708 includes a screen that provides an output interface between the device 700 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 708 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 700 is in an operational mode, such as a shooting mode or a video mode. Each of the front-facing camera and the rear-facing camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 710 is configured to output and/or input audio signals. For example, the audio component 710 includes a Microphone (MIC) configured to receive external audio signals when the device 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 704 or transmitted via the communication component 716. In some embodiments, the audio component 710 further includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor assembly 714 includes one or more sensors for providing status assessment of various aspects of the device 700. For example, the sensor assembly 714 may detect an on/off state of the device 700, a relative positioning of the components, such as a display and keypad of the device 700, a change in position of the device 700 or a component of the device 700, the presence or absence of user contact with the device 700, an orientation or acceleration/deceleration of the device 700, and a change in temperature of the device 700. The sensor assembly 714 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 714 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate wired or wireless communication between the device 700 and other devices. The device 700 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 716 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for performing the methods described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as the memory 704 including instructions executable by the processor 720 of the device 700 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (15)

1. A motion capture method, comprising:
acquiring a posture estimation result of a whole body, wherein the posture estimation result of the whole body comprises posture estimation results of detail parts and posture estimation results of other parts than the detail parts, each detail part is a part of the whole body, and a number of the detail parts is plural;
driving a three-dimensional model according to the posture estimation result of the whole body to obtain an initial whole body action of the three-dimensional model;
collecting, independently and in real time, target action video data of each detail part during playing of the initial whole body action;
identifying the target action video data of each detail part to obtain a target posture estimation result of each detail part;
in the process of obtaining the target posture estimation results, each time the target posture estimation result of one detail part is obtained, driving the three-dimensional model in real time according to the obtained target posture estimation result of the detail part together with the posture estimation results of the other parts, until the target posture estimation results of all the detail parts are obtained; and
driving the three-dimensional model jointly according to the target posture estimation results of all the detail parts and the posture estimation results of the other parts to obtain a target whole body action of the three-dimensional model.
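Read procedurally, claim 1 describes a split-capture loop: one whole-body pass, then one pass per detail part with a live preview, then a final joint drive. The following is a minimal illustrative sketch of that loop; the helper objects and their methods (camera.record, camera.record_while, estimator.estimate, model.drive, model.play) are assumptions made for the sketch, not part of the claim.

```python
# Illustrative sketch of the capture loop of claim 1. All helper names
# and signatures are assumed; the claim defines the steps, not an API.

def capture_target_whole_body_action(model, camera, estimator, detail_parts):
    # Step 1: acquire the posture estimation result of the whole body,
    # covering both the detail parts and the other parts.
    whole_body_pose = estimator.estimate(camera.record(), part="whole_body")

    # Step 2: drive the three-dimensional model to obtain the initial
    # whole body action.
    initial_action = model.drive(whole_body_pose)

    # Steps 3-5: for each detail part (e.g. ["hand", "face"]), replay the
    # initial action, capture that part's video independently, and refine
    # the model in real time as each target result arrives.
    target_detail_poses = {}
    for part in detail_parts:
        video = camera.record_while(model.play(initial_action))
        target_detail_poses[part] = estimator.estimate(video, part=part)
        model.drive(whole_body_pose, overrides=target_detail_poses)  # live preview

    # Step 6: a final joint drive yields the target whole body action.
    return model.drive(whole_body_pose, overrides=target_detail_poses)
```

The per-part loop is what lets a single performer record the body, hands, and face in separate takes while keeping them synchronized against the same replayed reference action.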
2. The motion capture method of claim 1, wherein the acquiring the posture estimation result of the whole body comprises:
acquiring pre-stored initial motion video data of the whole body; and
identifying the pre-stored initial motion video data of the whole body to obtain the posture estimation result of the whole body.
3. The motion capture method of claim 1, wherein the acquiring the posture estimation result of the whole body comprises:
collecting initial motion video data of the whole body in real time; and
identifying the initial motion video data of the whole body collected in real time to obtain the posture estimation result of the whole body.
4. The motion capture method of claim 2 or 3, wherein after the obtaining of the target whole body action of the three-dimensional model, the method further comprises:
when a re-acquisition instruction is received and the part category information included in the re-acquisition instruction is the whole body, acquiring updated motion video data of the whole body, wherein the initial motion video data of the whole body and the updated motion video data of the whole body are obtained through a same set of performance actions of the whole body;
identifying the updated motion video data of the whole body to obtain an updated posture estimation result of the whole body; and
driving the three-dimensional model according to the updated posture estimation result of the whole body to obtain an updated initial whole body action of the three-dimensional model.
5. The motion capture method of claim 1, wherein after the obtaining of the target whole body action of the three-dimensional model, the method further comprises:
when a re-acquisition instruction is received and the part category information included in the re-acquisition instruction is a detail part, playing the initial whole body action;
collecting updated motion video data of the detail part in real time during the playing of the initial whole body action, wherein the updated motion video data of the detail part and the target motion video data of the detail part are obtained through a same set of performance actions of the detail part;
identifying the updated motion video data of the detail part to obtain an updated posture estimation result of the detail part; and
driving the three-dimensional model jointly according to the updated posture estimation result of the detail part and the posture estimation results of the other parts to obtain an updated whole body action of the three-dimensional model.
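A practical detail implied by claim 5 is temporal alignment: the detail-part video is captured while the initial whole body action is played back, so each detail frame must be matched to the action frame shown at the same instant before the model can be driven jointly. The helper below is a hedged sketch of one way to do that pairing by timestamp; the frame representation (objects with a `t` field in seconds, sorted ascending) is an assumption of this sketch, not something the claim specifies.

```python
# Hedged sketch: pair each played-back action frame with the detail-part
# frame captured closest in time, so the updated detail posture replaces
# the matching segment of the whole body action. The frame objects and
# their `t` timestamp field are assumptions for illustration.

def align_by_timestamp(action_frames, detail_frames):
    """Return (action_frame, detail_frame) pairs matched by nearest timestamp.

    Both inputs are assumed sorted by their `t` field (seconds).
    """
    if not detail_frames:
        return []
    paired = []
    j = 0
    for action_frame in action_frames:
        # Advance while the next detail frame is strictly closer in time.
        while (j + 1 < len(detail_frames)
               and abs(detail_frames[j + 1].t - action_frame.t)
               < abs(detail_frames[j].t - action_frame.t)):
            j += 1
        paired.append((action_frame, detail_frames[j]))
    return paired
```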
6. The motion capture method of claim 1, wherein the detail part comprises at least one of a hand and a face.
7. A motion capture device, comprising:
an acquisition module configured to acquire a posture estimation result of a whole body, wherein the posture estimation result of the whole body comprises posture estimation results of detail parts and posture estimation results of other parts than the detail parts, each detail part is a part of the whole body, and a number of the detail parts is plural;
a driving module configured to drive a three-dimensional model according to the posture estimation result of the whole body to obtain an initial whole body action of the three-dimensional model; and
a detail part identification module configured to identify target action video data of each detail part to obtain a target posture estimation result of each detail part;
wherein the acquisition module is further configured to collect, independently and in real time, the target action video data of each detail part during playing of the initial whole body action; and
the driving module is further configured to: each time the target posture estimation result of one detail part is obtained, drive the three-dimensional model in real time according to the obtained target posture estimation result of the detail part together with the posture estimation results of the other parts, until the target posture estimation results of all the detail parts are obtained; and drive the three-dimensional model jointly according to the target posture estimation results of all the detail parts and the posture estimation results of the other parts to obtain a target whole body action of the three-dimensional model.
8. The motion capture device of claim 7, wherein the acquisition module is further configured to acquire pre-stored initial motion video data of the whole body; and
the device further comprises a whole body posture recognition module configured to identify the pre-stored initial motion video data of the whole body to obtain the posture estimation result of the whole body.
9. The motion capture device of claim 7, wherein the acquisition module is further configured to collect initial motion video data of the whole body in real time; and
the device further comprises a whole body posture recognition module configured to identify the initial motion video data of the whole body collected in real time to obtain the posture estimation result of the whole body.
10. The motion capture device of claim 8 or 9, wherein the device further comprises:
a receiving module configured to receive a re-acquisition instruction;
wherein the acquisition module is further configured to acquire updated motion video data of the whole body when the part category information included in the re-acquisition instruction is the whole body, wherein the initial motion video data of the whole body and the updated motion video data of the whole body are obtained through a same set of performance actions of the whole body;
the whole body posture recognition module is further configured to identify the updated motion video data of the whole body to obtain an updated posture estimation result of the whole body; and
the driving module is further configured to drive the three-dimensional model according to the updated posture estimation result of the whole body to obtain an updated initial whole body action of the three-dimensional model.
11. The motion capture device of claim 7, wherein the device further comprises:
a receiving module configured to receive a re-acquisition instruction; and
a playing module configured to play the initial whole body action when the part category information included in the re-acquisition instruction is a detail part;
wherein the acquisition module is further configured to collect updated motion video data of the detail part in real time during playing of the initial whole body action, wherein the updated motion video data of the detail part and the target motion video data of the detail part are obtained through a same set of performance actions of the detail part;
the detail part identification module is further configured to identify the updated motion video data of the detail part to obtain an updated posture estimation result of the detail part; and
the driving module is further configured to drive the three-dimensional model jointly according to the updated posture estimation result of the detail part and the posture estimation results of the other parts to obtain an updated whole body action of the three-dimensional model.
12. The motion capture device of claim 7, wherein the detail part comprises at least one of a hand and a face.
13. An electronic device, comprising:
A processor;
a memory for storing instructions executable by the processor;
Wherein the processor is configured to execute the instructions to implement the motion capture method of any one of claims 1 to 6.
14. A storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the motion capture method of any one of claims 1 to 6.
15. A computer program product comprising instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the motion capture method of any one of claims 1 to 6.
CN202010801654.XA 2020-08-11 2020-08-11 Motion capture method, device, electronic device and storage medium Active CN114078280B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010801654.XA CN114078280B (en) 2020-08-11 2020-08-11 Motion capture method, device, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN114078280A CN114078280A (en) 2022-02-22
CN114078280B (en) 2024-12-31

Family

ID=80279898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010801654.XA Active CN114078280B (en) 2020-08-11 2020-08-11 Motion capture method, device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN114078280B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116112716B (en) * 2023-04-14 2023-06-09 世优(北京)科技有限公司 Virtual person live broadcast method, device and system based on single instruction stream and multiple data streams

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN102622591B (en) * 2012-01-12 2013-09-25 北京理工大学 3D (three-dimensional) human posture capturing and simulating system
US10628664B2 (en) * 2016-06-04 2020-04-21 KinTrans, Inc. Automatic body movement recognition and association system
CN109800645A (en) * 2018-12-18 2019-05-24 武汉西山艺创文化有限公司 A kind of motion capture system and its method
CN109782911B (en) * 2018-12-30 2022-02-08 广州嘉影软件有限公司 Whole body motion capture method and system based on virtual reality
CN111191536A (en) * 2019-12-19 2020-05-22 武汉西山艺创文化有限公司 Motion capture system and method based on 5G communication technology

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN104700433A (en) * 2015-03-24 2015-06-10 中国人民解放军国防科学技术大学 Vision-based real-time general movement capturing method and system for human body

Non-Patent Citations (1)

Title
Research on multi-somatosensory fusion driving method for astronaut virtual interactive operation training; Zou Yu et al.; Journal of Graphics; 2018-08-15; Vol. 39, No. 4; Sections 1.1 to 1.4, 2.1 to 2.2, 3.2 *

Also Published As

Publication number Publication date
CN114078280A (en) 2022-02-22

Similar Documents

Publication Publication Date Title
EP3905203B1 (en) Method and apparatus for processing video, and storage medium
KR102114377B1 (en) Method for previewing images captured by electronic device and the electronic device therefor
CN113747050B (en) Shooting method and equipment
CN112199016B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113794923B (en) Video processing method, device, electronic equipment and readable storage medium
CN114025105B (en) Video processing method, device, electronic equipment and storage medium
US20210297595A1 (en) Method and apparatus for processing video data
CN108776822B (en) Target area detection method, device, terminal and storage medium
US11252341B2 (en) Method and device for shooting image, and storage medium
CN112509005B (en) Image processing method, image processing device, electronic equipment and storage medium
WO2023134583A1 (en) Video recording method and apparatus, and electronic device
CN110636383A (en) Video playing method and device, electronic equipment and storage medium
CN114078280B (en) Motion capture method, device, electronic device and storage medium
CN112511743B (en) Video shooting method and device
CN111340690B (en) Image processing method, device, electronic equipment and storage medium
CN112565625A (en) Video processing method, apparatus and medium
CN110636377A (en) Video processing method, device, storage medium, terminal and server
CN114143455B (en) Shooting method and device and electronic equipment
CN114078279B (en) Motion capture method, motion capture device, electronic equipment and storage medium
CN114125297B (en) Video shooting method, device, electronic equipment and storage medium
KR102557592B1 (en) Method and apparatus for displaying an image, electronic device and computer-readable storage medium
CN115174812A (en) Video generation method, video generation device and electronic equipment
CN114025237A (en) Video generation method and device and electronic equipment
CN114143462B (en) Shooting method and device
CN114267075B (en) Face recognition method, face interaction method, device and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant