Meta-universe remote interaction system based on motion capture
Technical Field
The invention relates to the technical field of remote interaction, and in particular to a meta-universe remote interaction system based on motion capture.
Background
The meta-universe is essentially a virtualization and digitization of the real world, a process that requires extensive changes to content production, economic systems, user experience, and physical-world content. Its development is progressive: it is ultimately formed by the continuous fusion and evolution of many tools and platforms supported by shared infrastructure, standards, and protocols. It provides immersive experience based on augmented reality technology, generates a mirror image of the real world based on digital twin technology, builds an economic system based on blockchain technology, closely integrates the virtual world with the real world in its economic, social, and identity systems, and allows every user to produce content and edit the world.
When performing remote interaction in the meta-universe, the action state of a real person must first be captured and then mapped onto the corresponding meta-universe character. This interaction mode is common in game development. In a basketball game, for example, different characters correspond to real basketball stars, and a star's signature shooting actions and dribbling style can be copied to the corresponding in-game character. However, real people act with strong subjectivity: during capture, the same action content can correspond to different action structures, and uncommon variants of a star's usual action are harder to recognize, which easily causes action disorder in the corresponding meta-universe character.
To address the above problems, a meta-universe remote interaction system based on motion capture is needed.
Disclosure of Invention
The invention aims to provide a meta-universe remote interaction system based on motion capture so as to solve the problems noted in the background above.
To achieve the above object, a meta-universe remote interaction system based on motion capture is provided, comprising a motion state capture module for capturing real human motion and generating captured images. The output end of the motion state capture module is connected with an action analysis module, which analyzes the captured images and judges the action content corresponding to different captured images. The output end of the action analysis module is connected with an action frequency calculation module, which judges the occurrence probability of each captured image according to the number of times different captured images are captured. The output end of the action frequency calculation module is connected with a database storage module, whose input end is also connected with the action analysis module. The database storage module stores each captured image together with its corresponding action content and occurrence probability, classifies different captured images of the same action content into one type, compares the occurrence probabilities of the captured images within that type, and selects the captured image with the highest probability as the preset captured image of that type. The output end of the database storage module is connected with a character adaptation module, which determines the application scene of each meta-universe character and, according to that scene, selects the corresponding preset captured image as the selected action of the character.
As a further improvement of the technical scheme, the action analysis module comprises an action structure recognition unit for determining the action structure of a captured image. The output end of the action structure recognition unit is connected with an environmental factor recognition unit, which determines the environment in which the action structure of the captured image is currently located. The output end of the environmental factor recognition unit is connected with an action content determination unit, which determines the action content currently represented by the captured image according to its action structure and the environment in which that structure is located.
As a further improvement of the technical scheme, the action frequency calculation module comprises a unit time determination unit for establishing a unit time and counting the different actions within it. The output end of the unit time determination unit is connected with an overall action counting unit, which determines the sum of all actions occurring in the unit time. The output end of the overall action counting unit is connected with a per-action calculation unit, which calculates the number of times each action occurs in the unit time.
As a further improvement of the technical scheme, the action frequency calculation module adopts an insertion sort algorithm comprising the following steps:
S1, determining the number of times each action occurs in unit time and drawing up an action count set A = {a1, a2, ..., an}, where a1 to an represent the number of times each action occurs;
S2, selecting the second number as the key value, comparing it with the preceding value, and exchanging the two if the preceding value is larger;
S3, selecting the third number as the key value, comparing it forward in the same way, and exchanging whenever a preceding number is larger;
S4, continuing in this manner until all action counts are sorted, so that the action count set A increases from left to right.
As a further improvement of the technical scheme, the output end of the character adaptation module is connected with the input end of the database storage module.
As a further improvement of the technical scheme, the output end of the character adaptation module is connected with a useless action classifying module, which establishes a unified standard for useless action contents of the same type.
As a further improvement of the technical scheme, the output end of the useless action classifying module is connected with the input end of the database storage module.
As a further improvement of the technical scheme, the output end of the action analysis module is connected with a repeated action filtering module, whose input end is connected with the output end of the action frequency calculation module. The repeated action filtering module analyzes the different action structures corresponding to the same action content, formulates an occurrence-count threshold for action structures by combining their occurrence counts, and filters out in advance any action structure whose count falls below that threshold.
Compared with the prior art, the invention has the beneficial effects that:
1. In the meta-universe remote interaction system based on motion capture, the action analysis module analyzes captured images and judges the action content corresponding to different captured images; the action frequency calculation module judges the occurrence probability of each captured image according to the number of times different captured images are captured, and the occurrence probability per unit time is treated as the behavioral or occupational habit of the person; the character adaptation module then determines the application scene of each meta-universe character and selects the corresponding preset captured image for that scene. This yields one-to-one interactive adaptation between a meta-universe character and the corresponding real person, and avoids adapting multiple action contents to a single common action, which would easily cause action disorder in the character and make the correct action content difficult to determine.
2. In the meta-universe remote interaction system based on motion capture, the character adaptation module marks unneeded action types as useless actions and feeds the marked useless actions back to the database storage module, which eliminates the corresponding useless actions, freeing storage space in the database storage module.
3. In the meta-universe remote interaction system based on motion capture, the character adaptation module transmits the marked useless actions to the useless action classifying module, which summarizes all useless actions and establishes a unified standard for useless action contents of the same type. For example, rest actions during basketball play, such as sitting on the ground, lying on a seat, or sitting on a seat, are not basketball actions and are marked as useless; since they are all rest actions, the useless action classifying module groups them into a single type of useless action for later recognition.
Drawings
FIG. 1 is an overall flow chart of the present invention;
FIG. 2 is a flow chart of the action analysis module of the present invention;
FIG. 3 is a flow chart of the action frequency calculation module of the present invention.
The meaning of each reference sign in the figure is:
10. A motion state capture module;
20. An action analysis module; 210. an action structure recognition unit; 220. an environmental factor recognition unit; 230. an action content determination unit;
30. An action frequency calculation module; 310. A unit time determination unit; 320. An overall action counting unit; 330. A per-action calculation unit;
40. a database storage module;
50. A character adaptation module;
60. a useless action classifying module;
70. A repeated action filtering module.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Referring to figs. 1-3, a meta-universe remote interaction system based on motion capture is provided, comprising a motion state capture module 10 for capturing real human motion and generating captured images. The output end of the motion state capture module 10 is connected with an action analysis module 20, which analyzes the captured images and judges the action content corresponding to different captured images. The output end of the action analysis module 20 is connected with an action frequency calculation module 30, which judges the occurrence probability of each captured image according to the number of times different captured images are captured. The output end of the action frequency calculation module 30 is connected with a database storage module 40, whose input end is also connected with the action analysis module 20. The database storage module 40 stores each captured image together with its corresponding action content and occurrence probability, classifies different captured images of the same action content into one type, compares the occurrence probabilities of the captured images within that type, and selects the captured image with the highest probability as the preset captured image of that type. The output end of the database storage module 40 is connected with a character adaptation module 50, which determines the application scene of each meta-universe character and, according to that scene, selects the corresponding preset captured image as the selected action of the character.
In specific use, the motion state capture module 10 captures real human motion, generates captured images, and transmits them to the action analysis module 20. The action analysis module 20 analyzes the captured images and judges the action content corresponding to each: for example, if the person in a captured image is lifting a leg forward with the upper body leaning forward and a football at the foot, the action content of that captured image is playing football. The action frequency calculation module 30 judges the occurrence probability of each captured image according to the number of times different captured images are captured; the occurrence probability per unit time is treated as the behavioral or occupational habit of the person, indicating actions the person performs frequently in reality, which the corresponding meta-universe character needs to imitate. The database storage module 40 stores each captured image with its action content and occurrence probability, classifies different captured images of the same action content into one type, compares the occurrence probabilities within that type, and selects the captured image with the highest probability as the preset captured image of the type. For a football-kicking action, for example, people lean forward by different amounts and lift the leg to different heights; these variants are classified together, and the forward-lean and leg-lift amplitudes with the highest occurrence probability are judged to be that person's football-kicking action. The character adaptation module 50 determines the application scene of each meta-universe character and selects the corresponding preset captured image according to the applicable scene as the selected action of the character. In this way the meta-universe character is interactively adapted one-to-one to the corresponding real person, avoiding the situation in which multiple action contents adapt to a single common action, which would easily cause action disorder in the character and make the correct action content difficult to determine.
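The preset-image selection performed by the database storage module 40 can be sketched as follows. This is a minimal illustration only: the function name, data layout, and sample probabilities are assumptions for the sketch, not the patent's actual implementation.

```python
from collections import defaultdict

def select_preset_images(captured):
    """Group captured images by action content and keep, per content type,
    the image with the highest occurrence probability as the preset image.

    captured: list of (image_id, action_content, probability) tuples.
    """
    groups = defaultdict(list)
    for image_id, content, prob in captured:
        groups[content].append((image_id, prob))
    # Within each action content type, the most frequently observed
    # variant becomes that type's preset captured image.
    return {content: max(images, key=lambda item: item[1])[0]
            for content, images in groups.items()}

# Hypothetical captures: two variants of the same kicking action plus a shot.
presets = select_preset_images([
    ("img_a", "kick ball", 0.45),
    ("img_b", "kick ball", 0.30),
    ("img_c", "shoot ball", 0.25),
])
print(presets)  # {'kick ball': 'img_a', 'shoot ball': 'img_c'}
```

The character adaptation module would then look up the preset image for whichever action contents its application scene requires.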
In addition, the action analysis module 20 includes an action structure recognition unit 210 for determining the action structure of a captured image. The output end of the action structure recognition unit 210 is connected with an environmental factor recognition unit 220, which determines the environment in which the action structure of the captured image is currently located, and the output end of the environmental factor recognition unit 220 is connected with an action content determination unit 230, which determines the action content represented by the captured image from both its action structure and its environment. In specific use, the action structure recognition unit 210 determines the action structure of a captured image, for example the limb motion of a person, generates action structure information, and transmits it to the environmental factor recognition unit 220. The environmental factor recognition unit 220 then determines the current environment of that action structure: when a person simply raises both hands, the person may be stretching the body, but when the hands hold a ball, the person is shooting, and the basketball, hoop, and court constitute the environment of the action structure. The action content determination unit 230 determines the action content represented by the captured image according to both the action structure and its environment, further improving the accuracy of action content recognition.
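The way the same action structure maps to different action contents depending on environment can be illustrated with a small rule table. The rule keys and labels below are assumptions chosen to match the hands-raised example above; the patent does not specify how the mapping is implemented.

```python
# Hypothetical (structure, environment) -> action content rule table,
# standing in for the action content determination unit 230.
RULES = {
    ("hands raised", "no object"): "stretching",
    ("hands raised", "basketball court"): "shooting",
    ("leg lifted forward", "football pitch"): "kicking a football",
}

def determine_action_content(structure, environment):
    """Combine the action structure with its environment to resolve
    the action content, as units 210/220/230 do in sequence."""
    return RULES.get((structure, environment), "unknown action")

print(determine_action_content("hands raised", "basketball court"))  # shooting
print(determine_action_content("hands raised", "no object"))         # stretching
```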
Further, the action frequency calculation module 30 includes a unit time determination unit 310 for establishing a unit time and counting the different actions within it. The output end of the unit time determination unit 310 is connected with an overall action counting unit 320, which determines the sum of all actions occurring in the unit time, and the output end of the overall action counting unit 320 is connected with a per-action calculation unit 330, which calculates the number of times each action occurs in the unit time. In specific use, the unit time determination unit 310 establishes a unit time and counts the different actions within it, the overall action counting unit 320 determines the sum of all actions occurring in that time, and the per-action calculation unit 330 calculates the occurrence count of each action against that sum, thereby determining the occurrence probability of each action for later action content selection.
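The counting and probability step of units 310-330 can be sketched as below; the list-of-labels input format is an assumption made for the sketch.

```python
from collections import Counter

def action_probabilities(actions_in_unit_time):
    """For one unit time, return each action's occurrence count and its
    probability (count divided by the overall sum, as unit 320 computes)."""
    counts = Counter(actions_in_unit_time)
    total = sum(counts.values())  # sum of all actions in the unit time
    return {action: (n, n / total) for action, n in counts.items()}

# Hypothetical unit-time capture log.
stats = action_probabilities(["dribble", "shoot", "dribble", "pass"])
print(stats["dribble"])  # (2, 0.5): 2 occurrences out of 4 actions
```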
Still further, the action frequency calculation module 30 employs an insertion sort algorithm comprising the following steps:
S1, determining the number of times each action occurs in unit time and drawing up an action count set A = {a1, a2, ..., an}, where a1 to an represent the number of times each action occurs;
S2, selecting the second number as the key value, comparing it with the preceding value, and exchanging the two if the preceding value is larger;
S3, selecting the third number as the key value, comparing it forward in the same way, and exchanging whenever a preceding number is larger;
S4, continuing in this manner until all action counts are sorted, so that the action count set A increases from left to right.
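Steps S1-S4 describe a standard insertion sort over the action count set A. A minimal runnable version (Python is an illustrative choice; the patent names no implementation language) is:

```python
def insertion_sort(counts):
    """Sort the action count set A = [a1, ..., an] in place so it
    increases from left to right, per steps S2-S4."""
    for i in range(1, len(counts)):        # take each value in turn as the key
        key = counts[i]
        j = i - 1
        while j >= 0 and counts[j] > key:  # shift larger predecessors rightward
            counts[j + 1] = counts[j]
            j -= 1
        counts[j + 1] = key                # insert the key at its sorted position
    return counts

print(insertion_sort([7, 2, 9, 4, 1]))  # [1, 2, 4, 7, 9]
```

Insertion sort is a reasonable choice here because the number of distinct actions per unit time is small and the counts arrive incrementally, where its O(n^2) worst case is not a concern.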
Specifically, the output end of the character adaptation module 50 is connected with the input end of the database storage module 40. In specific use, because different characters inhabit different scenes, the actions to be adapted differ: when adapting a basketball player character, for example, only the player's basketball actions need to be adapted, and other actions need not be recorded. The character adaptation module 50 marks such action types as useless actions and feeds the marked useless actions back to the database storage module 40, which eliminates the corresponding useless actions, freeing storage space in the database storage module 40.
In addition, the output end of the character adaptation module 50 is connected with a useless action classifying module 60, which establishes a unified standard for useless action contents of the same type. In specific use, the character adaptation module 50 transmits the marked useless actions to the useless action classifying module 60, which summarizes all useless actions and formulates a unified standard for those of the same type. For example, rest actions during basketball play, such as sitting on the ground, lying on a seat, or sitting on a seat, are not basketball actions and are marked as useless; since they are all rest actions, the useless action classifying module 60 groups them into a single type of useless action for later recognition.
Further, the output end of the useless action classifying module 60 is connected with the input end of the database storage module 40. In specific use, the useless action classifying module 60 summarizes the useless actions, establishes a unified standard for useless action contents of the same type, and determines the feature points shared by that standard; in rest actions, for example, the limbs hang in a naturally descending state, and this limb posture serves as the feature point. When the database storage module 40 later stores an action with the same feature points, comparing the feature point information alone is enough to judge the action as a rest action, and thus a useless action, without adaptation by the character adaptation module 50. This reduces the adaptation workload and improves the adaptation efficiency of the corresponding meta-universe characters.
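The feature-point shortcut described above can be sketched as a subset check: once the useless action classifying module 60 fixes a standard feature set for a useless-action type, any new capture containing all of those feature points is rejected before character adaptation. The feature labels below are illustrative assumptions.

```python
# Hypothetical unified standards: useless-action type -> required feature points.
USELESS_STANDARDS = {
    "rest action": {"limbs naturally descending", "torso static"},
}

def match_useless_type(capture_features):
    """Return the matched useless-action type, or None if the capture
    should proceed to the character adaptation module."""
    for action_type, required in USELESS_STANDARDS.items():
        if required <= capture_features:  # all standard feature points present
            return action_type
    return None

print(match_useless_type({"limbs naturally descending", "torso static", "seated"}))
# rest action
print(match_useless_type({"leg lifted forward"}))  # None
```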
Still further, the output end of the action analysis module 20 is connected with a repeated action filtering module 70, whose input end is connected with the output end of the action frequency calculation module 30. The repeated action filtering module 70 analyzes the different action structures corresponding to the same action content, formulates an occurrence-count threshold for action structures by combining their occurrence counts, and filters out in advance any action structure below that threshold, thereby reducing the storage load of the database storage module 40.
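The threshold filtering performed by the repeated action filtering module 70 amounts to dropping rare structural variants of an action before they reach storage. The structure labels and threshold value below are assumptions for illustration; the patent does not specify how the threshold is derived.

```python
def filter_rare_structures(structure_counts, threshold):
    """Keep only action structures whose occurrence count meets the
    threshold; rarer variants are filtered out in advance."""
    return {s: n for s, n in structure_counts.items() if n >= threshold}

# Hypothetical counts for three structural variants of one kicking action.
kept = filter_rare_structures(
    {"lean 5 deg": 1, "lean 15 deg": 12, "lean 25 deg": 7},
    threshold=5,
)
print(kept)  # {'lean 15 deg': 12, 'lean 25 deg': 7}
```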
The foregoing has shown and described the basic principles, principal features, and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the above embodiments, which, together with the description, merely illustrate preferred forms of the invention; various changes and modifications may be made without departing from the spirit and scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.