CN114155324B - Virtual character driving method and device, electronic equipment and readable storage medium - Google Patents
Virtual character driving method and device, electronic equipment and readable storage medium
- Publication number
- CN114155324B (application CN202111467341A)
- Authority
- CN
- China
- Prior art keywords
- virtual character
- control information
- target
- action
- virtual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Geometry (AREA)
- Computer Graphics (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present disclosure provides a virtual character driving method and apparatus, an electronic device, and a storage medium. The driving method includes: acquiring control information of a virtual character through a motion capture device, where the control information is used to drive the virtual character to make a target action, the motion capture device includes a plurality of feature points to be identified, the feature points to be identified are respectively matched with controlled feature points of the virtual character, and for at least two groups of feature points the ratio of the spacing between the feature points to be identified to the spacing between the matched controlled feature points is greater than a preset ratio threshold; adjusting the control information to obtain target control information when the target action is a contact action meeting a condition; and driving the virtual character to make a corresponding action based on the target control information. According to the embodiments of the application, the control information can be adjusted when the stature proportions of the actor and the virtual character do not match, so that the virtual character's action achieves the expected effect and the audience's viewing experience is improved.
Description
Technical Field
The disclosure relates to the field of computer technology, and in particular to a virtual character driving method and apparatus, an electronic device, and a storage medium.
Background
In recent years, virtual live streaming has accounted for an increasing share of video live-streaming services. In virtual live streaming, the host's real appearance is replaced with a specific avatar: control signals carrying the motion and expression data of the actor (the person behind the avatar) are acquired through external hardware devices and used to drive the avatar's actions in the 3D engine.
However, because the actor and the avatar differ in stature (e.g., in height), some actions performed by the actor (e.g., a clapping action) may not achieve the desired effect on the avatar. For example, the actor's palms may meet in a clap while the avatar's palms, because its arms are shorter, never touch, leaving a visible gap between them.
Disclosure of Invention
The embodiment of the disclosure at least provides a virtual character driving method, a virtual character driving device, electronic equipment and a storage medium.
An embodiment of the disclosure provides a virtual character driving method applied to an electronic device. The electronic device runs a 3D rendering environment, the 3D rendering environment comprises 3D scene information, the 3D scene information is used for generating a 3D scene after rendering, the 3D scene information comprises at least one piece of virtual character information, and the virtual character information is used for generating a virtual character after rendering. The method comprises the following steps:
acquiring control information of the virtual character through a motion capture device, where the control information is used to drive the virtual character to make a target action, the motion capture device includes a plurality of feature points to be identified, the feature points to be identified are respectively matched with controlled feature points of the virtual character, and for at least two groups of feature points the ratio of the spacing between the feature points to be identified to the spacing between the matched controlled feature points is greater than a preset ratio threshold;
under the condition that the target action is a contact action meeting the condition, adjusting the control information to obtain target control information;
and driving the virtual character to make corresponding actions based on the target control information.
In the embodiments of the disclosure, when the ratio of the spacing between the feature points to be identified on the motion capture device to the spacing between the matched controlled feature points of the virtual character is greater than the preset ratio threshold (that is, when the actor does not match the virtual character model), and the target action is detected to be a contact action meeting the condition, the acquired control information is adjusted to obtain target control information, and the virtual character is then driven to make the corresponding action based on the target control information. The action made by the virtual character therefore matches the expected effect and is consistent with the actor's contact action, which improves the visual effect of the virtual character.
In a possible implementation manner, the adjusting the control information to obtain target control information includes:
judging whether the target action belongs to a preset contact action or not;
determining a preset adjustment strategy matched with the target action based on a mapping relation table of the preset contact action and the adjustment strategy under the condition that the target action belongs to the preset contact action;
and adjusting the control information based on the preset adjustment strategy to obtain the target control information.
In the embodiment of the disclosure, when the target action belongs to the preset contact action, the preset adjustment strategy matched with the target action can be directly determined according to the mapping relation table of the preset contact action and the adjustment strategy, so that the determination efficiency of the adjustment strategy can be improved, and the adjustment efficiency of the control information is further improved.
In one possible implementation, there are multiple virtual characters in the 3D scene, each virtual character corresponding to a different adjustment strategy; the determining a preset adjustment strategy matched with the target action comprises the following steps:
determining a target virtual character matched with the target action;
and determining a preset adjustment strategy corresponding to the target virtual character.
In the embodiments of the disclosure, when multiple virtual characters exist in the 3D scene, the corresponding preset adjustment strategy is determined for each virtual character, so that the adjusted control information better fits the corresponding virtual character, which further helps the target action achieve the expected effect.
In one possible embodiment, the method further comprises:
determining a target adjustment strategy based on attribute information of the target action under the condition that the target action does not belong to the preset contact action;
and adjusting the control information based on the target adjustment strategy to obtain the target control information.
In the embodiment of the disclosure, when the target action does not belong to the preset contact action, the target adjustment strategy is determined based on the attribute information of the target action, so that the determined target adjustment strategy meets the adjustment requirement of the current target action, and the adjusted action achieves the expected effect.
In a possible implementation manner, the driving the virtual character to make a corresponding action based on the target control information includes:
Judging whether the target control information meets a preset requirement or not;
and under the condition that the target control information meets the preset requirement, driving the virtual character to make a corresponding action based on the target control information.
In one possible embodiment, the method further comprises:
under the condition that the target control information does not meet the preset requirement, adjusting the virtual character information to obtain adjusted virtual character information, where the adjusted virtual character information is used for generating an adjusted virtual character after rendering;
and driving the adjusted virtual character to make corresponding actions based on the target control information.
In the embodiment of the disclosure, the information of the virtual character is further adjusted under the condition that the target control information does not meet the preset requirement, so that the corresponding action made by the virtual character meets the expected requirement.
In a possible implementation manner, the adjusting the virtual character information includes:
determining the action part of the virtual character according to the target control information;
and adjusting the length information of the action part.
In the embodiment of the disclosure, the action part is determined by the control information, and the length information of the action part can be adjusted, so that the efficiency of adjusting the virtual character information can be improved.
The embodiment of the disclosure provides a driving device for a virtual character, comprising:
an acquisition module, configured to acquire control information of the virtual character through a motion capture device, where the control information is used to drive the virtual character to make a target action, the motion capture device includes a plurality of feature points to be identified, the feature points to be identified are respectively matched with controlled feature points of the virtual character, and for at least two groups of feature points the ratio of the spacing between the feature points to be identified to the spacing between the matched controlled feature points is greater than a preset ratio threshold;
the adjusting module is used for adjusting the control information under the condition that the target action is the contact action meeting the condition to obtain target control information;
and the driving module is used for driving the virtual character to make corresponding actions based on the target control information.
In one possible embodiment, the adjustment module is specifically configured to:
judging whether the target action belongs to a preset contact action or not;
determining a preset adjustment strategy matched with the target action based on a mapping relation table of the preset contact action and the adjustment strategy under the condition that the target action belongs to the preset contact action;
And adjusting the control information based on the preset adjustment strategy to obtain the target control information.
In one possible implementation, there are multiple virtual characters in the 3D scene, each virtual character corresponding to a different adjustment strategy; the adjusting module is specifically used for:
determining a target virtual character matched with the target action;
and determining a preset adjustment strategy corresponding to the target virtual character.
In one possible embodiment, the adjustment module is further configured to:
determining a target adjustment strategy based on attribute information of the target action under the condition that the target action does not belong to the preset contact action;
and adjusting the control information based on the target adjustment strategy to obtain the target control information.
In one possible embodiment, the driving module is specifically configured to:
judging whether the target control information meets a preset requirement or not;
and under the condition that the target control information meets the preset requirement, driving the virtual character to make a corresponding action based on the target control information.
In one possible embodiment, the driving module is specifically configured to:
under the condition that the target control information does not meet the preset requirement, adjusting the virtual character information to obtain adjusted virtual character information, where the adjusted virtual character information is used for generating an adjusted virtual character after rendering;
and driving the adjusted virtual character to make corresponding actions based on the target control information.
The embodiment of the disclosure provides an electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the method of driving a virtual character as described above.
The disclosed embodiments provide a computer-readable storage medium having a computer program stored thereon, which when executed by a processor performs a method of driving a virtual character as described above.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required for the embodiments are briefly described below. The drawings are incorporated in and constitute a part of the specification; they show embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings show only certain embodiments of the present disclosure and are therefore not to be regarded as limiting its scope; a person of ordinary skill in the art may obtain other related drawings from them without inventive effort.
Fig. 1 is a flowchart illustrating a first virtual character driving method according to an embodiment of the present disclosure;
FIG. 2 illustrates a schematic diagram of a relationship between a virtual character and an actor provided by an embodiment of the present disclosure;
FIG. 3 illustrates a flow chart of a method for determining an adjustment policy based on a target action provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a target action implemented by a virtual character according to current control information according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating a target action implemented by a virtual character according to adjusted control information according to an embodiment of the present disclosure;
FIG. 6 illustrates a flow chart of a method for driving a virtual character to perform a corresponding action provided by an embodiment of the present disclosure;
fig. 7 is a schematic structural view illustrating a driving apparatus for a virtual character according to an embodiment of the present disclosure;
fig. 8 shows a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure is not intended to limit the claimed scope of the disclosure but merely represents selected embodiments of the disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of this disclosure without inventive effort fall within the protection scope of this disclosure.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The term "and/or" herein merely describes an association relationship and means that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the term "at least one" herein means any one of multiple items or any combination of at least two of multiple items; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C.
In recent years, virtual live streaming has accounted for an increasing share of video live-streaming services. In virtual live streaming, the host's real appearance is replaced with a specific avatar: the host's expressions and body movements are applied to the avatar in real time, so that the avatar synchronously reproduces the corresponding expressions and movements. This emerging form of live streaming can meet the needs of specific audiences, such as viewers interested in anime ("2D") culture.
However, because the actor and the avatar differ in stature (e.g., in height), some actions performed by the actor (e.g., a clapping action) may not achieve the desired effect on the avatar. For example, the actor's palms may meet in a clap while the avatar's palms, because its arms are shorter, never touch, leaving a visible gap between them.
The present disclosure provides a virtual character driving method applied to an electronic device. The electronic device runs a 3D rendering environment, the 3D rendering environment includes 3D scene information, the 3D scene information is used for generating a 3D scene after rendering, the 3D scene information includes at least one piece of virtual character information, and the virtual character information is used for generating a virtual character after rendering. The method includes:
acquiring control information of the virtual character through a motion capture device, where the control information is used to drive the virtual character to make a target action, the motion capture device includes a plurality of feature points to be identified, the feature points to be identified are respectively matched with controlled feature points of the virtual character, and for at least two groups of feature points the ratio of the spacing between the feature points to be identified to the spacing between the matched controlled feature points is greater than a preset ratio threshold;
Under the condition that the target action is a contact action meeting the condition, adjusting the control information to obtain target control information;
and driving the virtual character to make corresponding actions based on the target control information.
The 3D scene information may run on a computer's CPU (Central Processing Unit), GPU (Graphics Processing Unit), and memory, and contains gridded model information and map texture information. Accordingly, the virtual character information includes, by way of example but not limitation, gridded model data, voxel data, and map texture data, or a combination thereof. The mesh includes, but is not limited to, a triangular mesh, a quadrilateral mesh, other polygonal meshes, or combinations thereof. In the embodiments of the disclosure, the mesh is a triangle mesh.
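As a rough illustration of this data layout, the following Python/NumPy sketch models the gridded (triangle-mesh) and texture data; the class and field names (`VirtualCharacterInfo`, `SceneInfo`) are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class VirtualCharacterInfo:
    vertices: np.ndarray  # (V, 3) float array: triangle-mesh vertex positions
    faces: np.ndarray     # (F, 3) int array: vertex indices of each triangle
    texture: np.ndarray   # (H, W, 4) image used as map texture data
    uvs: np.ndarray       # (V, 2) per-vertex texture coordinates

@dataclass
class SceneInfo:
    # the 3D scene information holds at least one piece of character information
    characters: List[VirtualCharacterInfo] = field(default_factory=list)
```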
Rendering the 3D scene information in a 3D rendering environment can generate a 3D scene. The 3D rendering environment may be a 3D engine running in the electronic device that can generate image information from one or more perspectives based on the data to be rendered. The virtual character information is a virtual character model existing in the 3D engine, from which a corresponding virtual character can be generated after rendering. In the disclosed embodiments, the virtual character may include a virtual anchor or a digital human, and the appearance of the virtual anchor may be a cartoon or anime-style image.
In the embodiments of the disclosure, if the ratio of the spacing between the feature points to be identified on the motion capture device to the spacing between the matched controlled feature points of the virtual character is greater than a preset ratio threshold (that is, if the actor does not match the virtual character model), the control information is adjusted to obtain target control information once the target action is detected to be a contact action meeting the condition; the virtual character is then driven to make the corresponding action based on the target control information, so that the action made by the virtual character matches the expected effect, stays consistent with the actor's action, and improves the visual effect of the virtual character.
The execution subject of the virtual character driving method provided by the embodiments of the disclosure is generally an electronic device with certain computing capability. The electronic device may be a server: an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud storage, big data, and artificial intelligence platforms. In other embodiments, the electronic device may be a terminal device or another processing device. In addition, the virtual character driving method may be implemented by a processor calling computer-readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a method for driving a virtual character according to an embodiment of the present disclosure includes the following steps S101 to S103:
S101, acquiring control information of the virtual character through a motion capture device, where the control information is used to drive the virtual character to make a target action, the motion capture device includes a plurality of feature points to be identified, the feature points to be identified are respectively matched with controlled feature points of the virtual character, and for at least two groups of feature points the ratio of the spacing between the feature points to be identified to the spacing between the matched controlled feature points is greater than a preset ratio threshold.
A virtual character, which may also be called an avatar, is a character created through drawing, animation, computer animation, or similar means that is active in virtual scenes such as the internet but does not exist in physical form. In the embodiments of the present disclosure, examples of such avatars are the virtual character 10 and the virtual character 20 in fig. 2.
The motion capture device includes clothing worn on the actor's body, gloves worn on the actor's hands, and the like. The clothing captures the actor's limb movements, and the gloves capture the actor's hand movements. Specifically, the motion capture device includes a plurality of feature points to be identified, which may correspond to key points of the actor's skeleton. For example, feature points may be placed at positions of the motion capture device corresponding to the actor's skeletal joints (such as knee joints, elbow joints, and finger joints). The feature points may be made of a specific material (such as a nanomaterial), so that their position information can be acquired through a camera to obtain the control information.
Accordingly, to allow the virtual character to be driven, the virtual character includes controlled feature points matched with the plurality of feature points to be identified. For example, the feature point to be identified at the actor's elbow joint is matched with the controlled point at the virtual character's elbow joint; that is, there is a one-to-one correspondence between the actor's skeleton key points and the virtual character's skeleton key points. After control information for the feature point at the actor's elbow joint is acquired, the virtual character's elbow joint can be driven to change correspondingly, and the changes of multiple controlled points together form the virtual character's action.
In the embodiments of the disclosure, for at least two groups of feature points, the ratio of the spacing between the feature points to be identified to the spacing between the matched controlled feature points is greater than a preset ratio threshold; that is, the actor's stature is inconsistent with, or differs considerably from, the virtual character's stature. The ratio may be a ratio of overall heights or a ratio for a particular body part, for example the ratio of arm lengths or leg lengths. The preset ratio threshold may be set according to the actual situation, for example 1.1 or 1.3.
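The following is a minimal sketch of this mismatch test, assuming 3D feature-point positions and assuming the ratio is taken as actor spacing over character spacing; the function name and the point-pair convention are illustrative only:

```python
import numpy as np

def stature_mismatch(actor_pts, char_pts, point_pairs, ratio_threshold=1.1):
    """Return True when at least two groups of feature points exceed the
    preset ratio threshold; point_pairs lists skeleton key-point pairs such
    as (shoulder, elbow) or (elbow, wrist) -- an assumed convention."""
    exceeded = 0
    for i, j in point_pairs:
        actor_span = np.linalg.norm(actor_pts[i] - actor_pts[j])
        char_span = np.linalg.norm(char_pts[i] - char_pts[j])
        if actor_span / max(char_span, 1e-9) > ratio_threshold:
            exceeded += 1
    # "at least two groups" per the condition described above
    return exceeded >= 2
```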
It should be noted that the motion capture device further includes a camera for capturing the actor's facial expression data and may further include a sound capture device (such as a microphone or a throat microphone). Accordingly, the control information in the embodiments of the present disclosure includes at least one of the actor's limb motion data, facial expression data, and sound data collected by the motion capture device. After the control information is acquired, the virtual character can be driven to make the target action matched with the control information.
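A container for such control information might look like the following sketch; the field names and data shapes are assumptions for illustration, not the patent's data model:

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class ControlInfo:
    limb_motion: np.ndarray                         # captured feature-point poses
    facial_expression: Optional[np.ndarray] = None  # e.g. expression coefficients
    sound: Optional[bytes] = None                   # audio from the sound capture device
```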
For example, referring to fig. 2, at least one virtual character may exist in the 3D scene; in this embodiment, a virtual character 10 and a virtual character 20 are taken as an example, and each virtual character corresponds to one actor. As shown in fig. 2, the virtual character 10 corresponds to actor A and the virtual character 20 corresponds to actor B, so the virtual character 10 can be driven to make a target action by capturing the motion of actor A, and the virtual character 20 by capturing the motion of actor B.
The target actions include, but are not limited to, limb actions of the virtual character, facial expression actions of the virtual character, actions in which the virtual character holds hands with or hugs other virtual characters, actions in which the virtual character interacts with other virtual objects, and the like.
S102, adjusting the control information to obtain target control information when the target action is a contact action meeting the condition.
It will be appreciated that when the target action is a contact action meeting the condition, the control information needs to be adjusted to obtain target control information that meets expectations. Eligible contact actions include, but are not limited to, contact between two parts of the same virtual character (such as a clapping action), contact between different virtual characters (such as holding hands), and contact between a virtual character and other virtual objects (such as a virtual character holding a gun). The virtual objects include, but are not limited to, various 3D props, virtual animals, virtual plants, and the like.
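These three classes of eligible contact actions can be summarized as a simple enumeration; this is an illustrative sketch, not a structure defined by the patent:

```python
from enum import Enum, auto

class ContactAction(Enum):
    SELF_CONTACT = auto()       # two parts of the same character, e.g. clapping
    CHARACTER_CONTACT = auto()  # between different characters, e.g. holding hands
    OBJECT_CONTACT = auto()     # character and another virtual object, e.g. a prop
```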
S103, driving the virtual character to make corresponding actions based on the target control information.
After the target control information is obtained, the virtual character can be driven to make corresponding actions based on the target control information.
In the embodiments of the disclosure, if the ratio of the spacing between the feature points to be identified on the motion capture device to the spacing between the matched controlled feature points of the virtual character is greater than a preset ratio threshold (that is, if the actor does not match the virtual character model), the control information is adjusted to obtain target control information once the target action is detected to be a contact action meeting the condition; the virtual character is then driven to make the corresponding action based on the target control information, so that the action made by the virtual character matches the expected effect, stays consistent with the actor's action, and improves the visual effect of the virtual character.
Referring to fig. 3, for the above step S102, when the control information is adjusted to obtain the target control information, the following steps S1021 to S1025 may be included:
S1021, judging whether the target action belongs to a preset contact action; if yes, performing step S1022; if not, performing step S1024.
It can be appreciated that, to improve adjustment efficiency, contact actions that occur frequently in a specific 3D scene may be set as preset contact actions according to specific requirements. For example, in a 3D scene where the virtual character often needs to clap or touch its face with its hands, clapping and face-touching may be set as preset contact actions, and a corresponding adjustment strategy may be determined for each according to the ratio between the actor and the virtual character. A mapping relation table between preset actions and adjustment strategies can thus be formed, for example: preset action X corresponds to adjustment strategy X, preset action Y corresponds to adjustment strategy Y, and preset action Z corresponds to adjustment strategy Z.
It should be noted that the adjustment strategies corresponding to different preset actions may be set according to actual requirements: a strategy may adjust the control information, adjust the virtual character information, or combine the two, which is not limited here. Adjusting the virtual character information may include adjusting the virtual character's skeleton information (such as bone length and bone thickness).
S1022, determining a preset adjustment strategy matched with the target action based on a mapping relation table of the preset contact action and the adjustment strategy.
In an exemplary embodiment, when it is determined that the target action belongs to the preset contact action, the adjustment policy matched with the target action may be determined according to the mapping relationship table between the preset action and the adjustment policy.
When multiple virtual characters exist in a 3D scene, the ratio between each virtual character and its corresponding actor differs, so the adjustment strategy differs for each virtual character even for the same target action. Therefore, in the embodiments of the present disclosure, the target virtual character matched with the target action should be determined first, and then the preset adjustment strategy corresponding to that target virtual character. For example, for preset action X, the adjustment strategy for the virtual character 10 in fig. 2 is strategy X1, while the adjustment strategy for the virtual character 20 in fig. 2 is strategy X2.
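A minimal sketch of such a mapping relation table, extended with the per-character dimension just described, could look like the following; the action, character, and strategy names are hypothetical placeholders mirroring the example above:

```python
# Hypothetical mapping relation table keyed by (preset contact action,
# virtual character): action X maps to strategy X1 for character 10
# and to strategy X2 for character 20, as in the example above.
PRESET_STRATEGIES = {
    ("action_X", "character_10"): "strategy_X1",
    ("action_X", "character_20"): "strategy_X2",
    ("action_Y", "character_10"): "strategy_Y1",
}

def preset_strategy(target_action: str, target_character: str):
    # Returns None when the action is not a preset contact action,
    # in which case step S1024 derives a strategy from its attributes.
    return PRESET_STRATEGIES.get((target_action, target_character))
```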
S1023, adjusting the control information based on the preset adjustment strategy to obtain the target control information.
After determining the preset adjustment policy corresponding to the target action, the control information may be adjusted based on the preset adjustment policy to obtain the target control information, so that the corresponding action made by the virtual character meets the expected requirement.
S1024, determining a target adjustment strategy based on the attribute information of the target action.
It will be appreciated that in practice the target action may be a contact action outside the preset contact actions, for example an action in which the virtual character bends down and touches its feet, holds hands with another virtual character, or grasps a 3D prop. When a contact action is not in the preset contact-action list, a target adjustment strategy should be determined based on the attribute information of the target action; that is, the corresponding target adjustment strategy is determined in real time for each such action, so that the target action achieves the expected effect.
Illustratively, the attribute information of the target action includes, but is not limited to, the location of the action, the type of the action, the magnitude of the action, and the like. The target adjustment strategy may be to adjust the curvature of the action part or to adjust the placement position of the action part.
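As an illustration, a strategy derived from such attribute information might be sketched as follows; the attribute keys and the returned strategy fields are assumptions, not the patent's data model:

```python
def derive_strategy(attributes: dict, stature_ratio: float) -> dict:
    """Derive a target adjustment strategy in real time from the action's
    attribute information (part, type, magnitude) when no preset entry exists."""
    strategy = {"part": attributes["part"]}
    if attributes["type"] == "bend":
        # adjust how far the action part bends
        strategy["bend_scale"] = stature_ratio
    else:
        # otherwise adjust where the action part is placed
        strategy["offset_scale"] = stature_ratio
    return strategy
```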
S1025, adjusting the control information based on the target adjustment strategy to obtain the target control information.
Specifically, referring to fig. 4, take the hand-holding contact between the virtual character 10 and the other virtual character 20 as an example. Suppose the control information obtained by the motion capture device specifies a 30-degree angle between arm and body, i.e., the actor's arm is only slightly raised, which is enough for the actors themselves to hold hands. Because the statures of the virtual characters and the actors are inconsistent (for example, the virtual characters are smaller than the actors), controlling the virtual character's arm to open 30 degrees according to this control information would not bring the two virtual characters' hands into contact. In this case, the opening angle in the control information can be increased, for example from 30 to 60 degrees between arm and body, so that the virtual character's arm continues to rise and the virtual character 10 and the other virtual character 20 achieve the hand-holding action and the expected effect, as shown in fig. 5.
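A worked sketch of this angle adjustment follows; the doubling of the captured angle and the 90-degree cap are illustrative values, not prescribed by the patent:

```python
def widen_arm_angle(captured_deg: float, scale: float = 2.0,
                    cap_deg: float = 90.0) -> float:
    """Widen the captured arm-body angle so a hand-holding contact that
    closed for the actors also closes for the shorter characters."""
    return min(captured_deg * scale, cap_deg)

assert widen_arm_angle(30.0) == 60.0  # the fig. 4 -> fig. 5 example above
```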
It will be appreciated that in some embodiments the expected effect still cannot be achieved after adjusting the position or bending of the action part. Taking hand-holding between virtual characters as an example again: even if the angle between the virtual character's arm and body is adjusted to 90 degrees, the character may still be unable to hold hands with the other character. In that case the virtual character information itself should be adjusted, for example by stretching the virtual character's arm length until the hand-holding action is achieved.
Thus, in some embodiments, referring to fig. 6, for the above step S103, when driving the virtual character to make a corresponding action based on the target control information, the following steps S1031 to S1034 may be included:
S1031, judging whether the target control information meets a preset requirement; if yes, performing step S1032; if not, performing step S1033.
S1032, driving the virtual roles to make corresponding actions based on the target control information.
S1033, adjusting the virtual character information to obtain adjusted virtual character information; the adjusted virtual character information is used for generating an adjusted virtual character after rendering.
S1034, driving the adjusted virtual character to make a corresponding action based on the target control information.
For example, if the adjusted control information can meet the expected requirement, the virtual character can be driven to make the corresponding action directly based on the adjusted target control information. If the expected requirement is still not met after the control information is adjusted, the virtual character information is adjusted: specifically, the action part of the virtual character can be determined from the target control information, and the length information of that action part adjusted, so that the expected requirement can be met with only a small adjustment of the virtual character information. For example, if the target action is the virtual character reaching out to grasp a virtual object placed at a preset position, and the corresponding virtual object cannot be grasped even with the virtual character's arm straightened, the length information of the virtual character's arm should be adjusted until the grasping action is achieved.
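Putting steps S1031 to S1034 together, a fallback loop under an assumed character API might be sketched as follows; `apply`, `bones`, and the `contact_closed` predicate are hypothetical names, not part of the patent:

```python
def drive_with_fallback(character, target_info, contact_closed,
                        stretch_step=0.05, max_steps=20):
    """Apply the adjusted control information; if the contact still cannot
    close (the preset requirement is not met), stretch the acting bone step
    by step and re-apply until the action (e.g. the grasp) succeeds."""
    character.apply(target_info)
    for _ in range(max_steps):
        if contact_closed(character):
            return True
        part = target_info["part"]  # action part determined from target_info
        character.bones[part].length *= 1.0 + stretch_step
        character.apply(target_info)
    return False
```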
In addition, in some embodiments, the 3D scene information further includes at least one virtual camera used to capture image information of the 3D scene; accordingly, video data may be generated based on the camera information of the virtual camera and the 3D scene information. The video data includes a plurality of video frames.
The generated video data may be shown locally, recorded, or formed into a live video stream for live broadcast. For example, if the electronic device has a display screen or is externally connected to a display device, the generated video data can be played locally.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Based on the same technical concept, an embodiment of the disclosure further provides a virtual character driving apparatus corresponding to the virtual character driving method. Since the principle by which the apparatus solves the problem is similar to that of the method, the implementation of the apparatus can refer to the implementation of the method, and repeated description is omitted.
Referring to fig. 7, a schematic diagram of a driving apparatus 500 for a virtual character according to an embodiment of the disclosure is shown, where the apparatus includes:
an acquisition module 501, configured to acquire control information of the virtual character through a motion capture device, where the control information is used to drive the virtual character to make a target action, the motion capture device includes a plurality of feature points to be identified, the feature points to be identified are respectively matched with controlled feature points of the virtual character, and for at least two groups of feature points the ratio of the spacing between the feature points to be identified to the spacing between the matched controlled feature points is greater than a preset ratio threshold;
the adjustment module 502 is configured to adjust the control information to obtain target control information when the target action is a contact action that meets a condition;
and the driving module 503 is configured to drive the virtual character to perform a corresponding action based on the target control information.
In one possible implementation, the adjustment module 502 is specifically configured to:
judging whether the target action belongs to a preset contact action or not;
determining a preset adjustment strategy matched with the target action based on a mapping relation table of the preset contact action and the adjustment strategy under the condition that the target action belongs to the preset contact action;
And adjusting the control information based on the preset adjustment strategy to obtain the target control information.
In one possible implementation, there are multiple virtual characters in the 3D scene, each virtual character corresponding to a different adjustment strategy; the adjustment module 502 is specifically configured to:
determining a target virtual character matched with the target action;
and determining a preset adjustment strategy corresponding to the target virtual character.
In one possible implementation, the adjustment module 502 is further configured to:
determining a target adjustment strategy based on attribute information of the target action under the condition that the target action does not belong to the preset contact action;
and adjusting the control information based on the target adjustment strategy to obtain the target control information.
In one possible implementation, the driving module 503 is specifically configured to:
judging whether the target control information meets a preset requirement or not;
and under the condition that the target control information meets the preset requirement, driving the virtual character to make a corresponding action based on the target control information.
In one possible implementation, the driving module 503 is specifically configured to:
under the condition that the target control information does not meet the preset requirement, adjusting the virtual character information to obtain adjusted virtual character information, where the adjusted virtual character information is used for generating an adjusted virtual character after rendering;
and driving the adjusted virtual character to make corresponding actions based on the target control information.
The process flow of each module in the apparatus and the interaction flow between the modules may be described with reference to the related descriptions in the above method embodiments, which are not described in detail herein.
Based on the same technical concept, an embodiment of the disclosure also provides an electronic device. Referring to fig. 8, a schematic structural diagram of an electronic device 700 according to an embodiment of the disclosure includes a processor 701, a memory 702, and a bus 703. The memory 702 is configured to store execution instructions and includes an internal memory 7021 and an external memory 7022; the internal memory 7021 temporarily stores operation data for the processor 701 and data exchanged with the external memory 7022 (such as a hard disk), and the processor 701 exchanges data with the external memory 7022 through the internal memory 7021.
In the embodiment of the present application, the memory 702 is specifically configured to store application program codes for executing the solution of the present application, and the processor 701 controls the execution. That is, when the electronic device 700 is in operation, communication between the processor 701 and the memory 702 via the bus 703 causes the processor 701 to execute the application code stored in the memory 702, thereby performing the methods described in any of the previous embodiments.
The memory 702 may be, but is not limited to, a random access memory (Random Access Memory, RAM), a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable Read-Only Memory, PROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), etc.
The processor 701 may be an integrated circuit chip with signal processing capability. The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logical blocks disclosed in the embodiments of the present disclosure may be implemented or executed by such a processor. A general-purpose processor may be a microprocessor, or any conventional processor.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 700. In other embodiments of the present application, electronic device 700 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The disclosed embodiments also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of driving a virtual character in the method embodiments described above. Wherein the storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiments of the present disclosure further provide a computer program product, where the computer program product carries program code, and instructions included in the program code may be used to execute the steps of the method for driving a virtual character in the foregoing method embodiments, and specifically refer to the foregoing method embodiments, which are not described herein in detail.
Wherein the above-mentioned computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical functional division, and there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or a part of the technical solution, or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present disclosure used to illustrate, not limit, its technical solutions, and the protection scope of the present disclosure is not limited to them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that anyone familiar with this technical field may still, within the technical scope disclosed here, modify the technical solutions recorded in the foregoing embodiments, readily conceive of changes, or make equivalent replacements of some of the technical features; such modifications, changes, or replacements do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure and shall all be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (10)
1. A virtual character driving method, characterized in that the method is applied to an electronic device, the electronic device runs a 3D rendering environment, the 3D rendering environment comprises 3D scene information, the 3D scene information is used for generating a 3D scene after rendering, the 3D scene information comprises at least one piece of virtual character information, and the virtual character information is used for generating a virtual character after rendering, and the method comprises the following steps:
acquiring control information of the virtual character through a motion capture device, wherein the control information is used for driving the virtual character to make a target action, the motion capture device comprises a plurality of feature points to be identified, the feature points to be identified are respectively matched with controlled feature points of the virtual character, and for at least two groups of feature points the ratio of the spacing between the feature points to be identified to the spacing between the matched controlled feature points is larger than a preset ratio threshold;
under the condition that the target action is a contact action meeting the condition, adjusting the control information to obtain target control information, wherein the eligible contact actions comprise at least one of: a contact action between two parts of the same virtual character, a contact action between different virtual characters, and a contact action between the virtual character and another virtual object;
and driving the virtual character to make corresponding actions based on the target control information.
2. The method of claim 1, wherein the adjusting the control information to obtain target control information comprises:
judging whether the target action belongs to a preset contact action or not;
Determining a preset adjustment strategy matched with the target action based on a mapping relation table of the preset contact action and the adjustment strategy under the condition that the target action belongs to the preset contact action;
and adjusting the control information based on the preset adjustment strategy to obtain the target control information.
3. The method of claim 2, wherein there are multiple virtual characters in the 3D scene, each virtual character corresponding to a different adjustment strategy; and the determining a preset adjustment strategy matched with the target action comprises the following steps:
determining a target virtual character matched with the target action;
and determining a preset adjustment strategy corresponding to the target virtual character.
4. The method according to claim 2, wherein the method further comprises:
determining a target adjustment strategy based on attribute information of the target action under the condition that the target action does not belong to the preset contact action;
and adjusting the control information based on the target adjustment strategy to obtain the target control information.
5. The method of claim 4, wherein driving the virtual character to take a corresponding action based on the target control information comprises:
Judging whether the target control information meets a preset requirement or not;
and under the condition that the target control information meets the preset requirement, driving the virtual character to make a corresponding action based on the target control information.
6. The method of claim 5, wherein the method further comprises:
adjusting the virtual character information to obtain adjusted virtual character information under the condition that the target control information does not meet the preset requirement, wherein the adjusted virtual character information is used for generating an adjusted virtual character after rendering;
and driving the adjusted virtual character to make a corresponding action based on the target control information.
7. The method of claim 6, wherein the adjusting the virtual character information comprises:
determining the action part of the virtual character according to the target control information;
and adjusting the length information of the action part.
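Claims 5 through 7 together describe a validation-and-fallback loop: check the target control information against a preset requirement, and if it still fails, adjust the character itself by resizing the acting part. A rough sketch, with the error bound, bone representation, and correction rule all assumed:

```python
from dataclasses import dataclass

MAX_CONTACT_ERROR = 0.01  # assumed "preset requirement": allowed gap/penetration (m)

@dataclass
class Bone:
    name: str
    length: float  # length information of the action part

def meets_requirement(contact_error: float) -> bool:
    """Claim 5: does the target control information satisfy the preset requirement?"""
    return abs(contact_error) <= MAX_CONTACT_ERROR

def adjust_action_part(bone: Bone, contact_error: float) -> Bone:
    """Claims 6-7: otherwise adjust the length information of the part that
    performs the action (e.g. lengthen a forearm that falls short of the
    contact point), yielding adjusted virtual character information."""
    return Bone(bone.name, bone.length + contact_error)
```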
8. A virtual character driving apparatus, comprising:
an acquisition module, configured to acquire control information of the virtual character through a motion capture device, wherein the control information is used for driving the virtual character to make a target action, the motion capture device comprises a plurality of feature points to be identified, the feature points to be identified are respectively matched with controlled feature points of the virtual character, and the ratio of the spacing between at least two groups of the feature points to be identified to the spacing between the matched controlled feature points is greater than a preset ratio threshold;
an adjustment module, configured to adjust the control information to obtain target control information under the condition that the target action is a contact action meeting a condition, wherein the contact action meeting the condition comprises at least one of: a contact action between two parts of the same virtual character, a contact action between different virtual characters, and a contact action between the virtual character and another virtual object;
and a driving module, configured to drive the virtual character to make a corresponding action based on the target control information.
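As an informal reading of claim 8, the apparatus mirrors the method as three cooperating modules; all class and method names below are assumed for illustration only.

```python
class AcquisitionModule:
    """Acquires control information (feature points + target action) from the
    motion capture device."""
    def acquire(self, capture_device):
        return capture_device.read()

class AdjustmentModule:
    """Turns control information into target control information for
    qualifying contact actions."""
    def adjust(self, control):
        return control  # placeholder: a real module applies the matched strategy

class DrivingModule:
    """Drives the virtual character to make the corresponding action."""
    def drive(self, character, control):
        character.apply(control)
```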
9. An electronic device, characterized by comprising a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the virtual character driving method according to any one of claims 1-7.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, and the computer program, when executed by a processor, performs the virtual character driving method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111467341.6A CN114155324B (en) | 2021-12-02 | 2021-12-02 | Virtual character driving method and device, electronic equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114155324A CN114155324A (en) | 2022-03-08 |
CN114155324B (en) | 2023-07-25
Family
ID=80456300
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111467341.6A Active CN114155324B (en) | 2021-12-02 | 2021-12-02 | Virtual character driving method and device, electronic equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114155324B (en) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010102288A2 (en) * | 2009-03-06 | 2010-09-10 | The University Of North Carolina At Chapel Hill | Methods, systems, and computer readable media for shader-lamps based physical avatars of real and virtual people |
US8334872B2 (en) * | 2009-05-29 | 2012-12-18 | Two Pic Mc Llc | Inverse kinematics for motion-capture characters |
US9898844B2 (en) * | 2013-12-31 | 2018-02-20 | Daqri, Llc | Augmented reality content adapted to changes in real world space geometry |
CN106512398B (en) * | 2016-12-06 | 2021-06-18 | 腾讯科技(深圳)有限公司 | Reminding method in virtual scene and related device |
CN111144266B (en) * | 2019-12-20 | 2022-11-22 | 北京达佳互联信息技术有限公司 | Facial expression recognition method and device |
CN113655889B (en) * | 2021-09-01 | 2023-08-08 | 北京字跳网络技术有限公司 | Virtual character control method, device and computer storage medium |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010090856A1 (en) * | 2009-01-21 | 2010-08-12 | Liu C Karen | Character animation control interface using motion capture |
CN108510500A (en) * | 2018-05-14 | 2018-09-07 | 深圳市云之梦科技有限公司 | Hair layer processing method and system for a virtual character image based on face skin color detection |
Non-Patent Citations (1)
Title |
---|
Ergonomic study of the standing and sitting postures of minors based on 3D-bodyscanWorX; Zhu Zhengfeng; Zhang Ming; Journal of Zhongyuan University of Technology (04); 41-44 *
Also Published As
Publication number | Publication date |
---|---|
CN114155324A (en) | 2022-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP2023521952A | | 3D Human Body Posture Estimation Method and Apparatus, Computer Device, and Computer Program |
CN111260764B | | Method, device and storage medium for making animation |
CN112348937B | | Face image processing method and electronic device |
TW201911082A | | Image processing method, device and storage medium |
CN114612643B | | Image adjustment method and device for virtual object, electronic equipment and storage medium |
CN109688346A | | Trailing special effect rendering method, device, equipment and storage medium |
CN112218107B | | Live broadcast rendering method and device, electronic equipment and storage medium |
EP2880633A1 | | Animating objects using the human body |
US20210133433A1 | | Method, apparatus, electronic device and storage medium for expression driving |
JP2023549240A | | Image generation method, image generation device, computer equipment, and computer program |
CN110147737B | | Method, apparatus, device and storage medium for generating video |
CN109035415B | | Virtual model processing method, device, equipment and computer readable storage medium |
CN110807410A | | Key point positioning method and device, electronic equipment and storage medium |
CN114399424B | | Model training method and related equipment |
CN107610239B | | Virtual try-on method and device for facial makeup |
CN113840158B | | Virtual image generation method, device, server and storage medium |
CN115063518A | | Trajectory rendering method, device, electronic device and storage medium |
CN115861498A | | Redirection method and device for motion capture |
CN116580151A | | Human body three-dimensional model construction method, electronic equipment and storage medium |
CN115601482A | | Digital human action control method and device, equipment, medium and product thereof |
CN114155324B | | Virtual character driving method and device, electronic equipment and readable storage medium |
CN113887319B | | Method, device, electronic device and storage medium for determining three-dimensional posture |
CN114237396B | | Action adjustment method, action adjustment device, electronic equipment and readable storage medium |
CN113761965B | | Motion capture method, motion capture device, electronic equipment and storage medium |
CN112686990A | | Three-dimensional model display method and device, storage medium and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||