
CN115793866B - Meta-universe remote interaction system based on motion capture - Google Patents


Info

Publication number
CN115793866B
Authority
CN
China
Prior art keywords
action
motion
module
unit
captured image
Prior art date
Legal status
Active
Application number
CN202310104534.8A
Other languages
Chinese (zh)
Other versions
CN115793866A (en)
Inventor
王亚刚
李元元
程思锦
Current Assignee
Xi'an Feidie Virtual Reality Technology Co ltd
Original Assignee
Xi'an Feidie Virtual Reality Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xi'an Feidie Virtual Reality Technology Co ltd
Priority to CN202310104534.8A
Publication of CN115793866A
Application granted
Publication of CN115793866B
Legal status: Active


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of remote interaction, and in particular to a meta-universe remote interaction system based on motion capture. The system comprises an action analysis module, an action frequency calculation module and a character adaptation module. The action analysis module analyzes captured images and judges the action content corresponding to each of them. The action frequency calculation module judges the occurrence probability of each captured image from its capture count, and the occurrence probability per unit time is taken as a measure of the person's behavioral or occupational habits. The character adaptation module then determines the applicable scene of each meta-universe character and selects the corresponding preset captured image for that scene. In this way the meta-universe character is adapted one-to-one to the real person it corresponds to, avoiding the situation in which several action contents map onto a single common action, which easily causes action disorder in the corresponding meta-universe character and makes the correct action content difficult to determine.

Description

Meta-universe remote interaction system based on motion capture
Technical Field
The invention relates to the technical field of remote interaction, in particular to a meta-universe remote interaction system based on motion capture.
Background
The meta-universe is essentially a process of virtualizing and digitizing the real world, one that requires extensive changes to content production, economic systems, user experience and physical-world content. Its development is progressive: it is ultimately formed by the continuous fusion and evolution of many tools and platforms on top of shared infrastructure, standards and protocols. It provides immersive experience based on augmented reality technology, generates a mirror image of the real world based on digital twin technology, and builds an economic system based on blockchain technology, integrating the virtual world closely with the real world across economic, social and identity systems while allowing every user to produce content and edit the world.
In meta-universe remote interaction, the action state of the corresponding person must be captured in reality and then transferred to the corresponding meta-universe character. This interaction mode is often used in game development. In basketball games, for example, different characters correspond to real-world star players, whose signature shooting actions and dribbling styles are copied onto the corresponding in-game characters. However, real people act with strong subjectivity: during capture, the same action content may correspond to several different action structures, and action patterns a player uses rarely are much harder to identify than the signature moves used habitually, which easily causes action disorder in the corresponding meta-universe character.
In order to address the above problems, a meta-universe remote interaction system based on motion capture is needed.
Disclosure of Invention
The invention aims to provide a meta-universe remote interaction system based on motion capture, so as to solve the problems set out in the background above.
In order to achieve the above object, a meta-universe remote interaction system based on motion capture is provided, comprising a motion state capturing module, wherein the motion state capturing module is used for capturing real human motion to generate captured images. The output end of the motion state capturing module is connected with an action analysis module, and the action analysis module is used for analyzing the captured images and judging the action content corresponding to different captured images. The output end of the action analysis module is connected with an action frequency calculation module, and the action frequency calculation module is used for judging the occurrence probability of each captured image according to the capture counts of the different captured images. The output end of the action frequency calculation module is connected with a database storage module, and the input end of the database storage module is connected with the input end of the action analysis module. The database storage module is used for storing each captured image and its corresponding action content, storing the occurrence probability of each captured image, classifying the different captured images of the same action content into an image type, comparing the occurrence probabilities of the captured images within that type, and selecting the captured image with the highest probability as the preset captured image of the type. The output end of the database storage module is connected with a character adaptation module, and the character adaptation module is used for determining the applicable scene of each meta-universe character and selecting the corresponding preset captured image according to the applicable scene as the selected action of the character.
As a further improvement of the technical scheme, the action analysis module comprises an action structure identification unit, wherein the action structure identification unit is used for determining the action structure of a captured image. The output end of the action structure identification unit is connected with an environmental factor identification unit, which is used for determining the environment in which the action structure of the captured image is currently located. The output end of the environmental factor identification unit is connected with an action content determination unit, which determines the action content currently represented by the action structure according to the action structure of the captured image and the environment in which it is located.
As a further improvement of the technical scheme, the action frequency calculation module comprises a unit time determination unit, wherein the unit time determination unit is used for defining a unit time and counting the different actions occurring within it. The output end of the unit time determination unit is connected with an overall action counting unit, which is used for determining the sum of all actions occurring in the unit time. The output end of the overall action counting unit is connected with a per-action calculation unit, which is used for calculating the number of occurrences of each action in the unit time.
As a further improvement of the technical scheme, the action frequency calculation module adopts an insertion sort algorithm, which comprises the following steps:
S1, determine the number of occurrences of each action in the unit time and draw up an action count set A = {a₁, a₂, …, aₙ}, where a₁ to aₙ represent the number of times each action occurs;
S2, select the second number a₂ as the key value and compare it with the value before it; if the previous value is larger, exchange them;
S3, select the third number a₃ as the key value and compare it forwards; if a previous number is larger, exchange them;
S4, proceed by analogy until all the action counts are sorted, so that the set A increases from left to right.
As a further improvement of the technical scheme, the output end of the character adaptation module is connected with the input end of the database storage module.
As a further improvement of the technical scheme, the output end of the character adaptation module is connected with a useless action classifying module, and the useless action classifying module is used for formulating a unified standard for useless action contents of the same type.
As a further improvement of the technical scheme, the output end of the useless action classifying module is connected with the input end of the database storage module.
As a further improvement of the technical scheme, the output end of the action analysis module is connected with a repeated action filtering module, and the input end of the repeated action filtering module is connected with the output end of the action frequency calculation module. The repeated action filtering module is used for analyzing the different action structures corresponding to the same action content and, based on the number of occurrences of each action structure, formulating an occurrence count threshold and filtering out in advance action structures that fall below it.
Compared with the prior art, the invention has the beneficial effects that:
1. In this meta-universe remote interaction system based on motion capture, the action analysis module analyzes captured images and judges the action content corresponding to each of them; the action frequency calculation module judges the occurrence probability of each captured image from its capture count, and the occurrence probability per unit time is taken as a measure of the person's behavioral or occupational habits; the character adaptation module then determines the applicable scene of each meta-universe character and selects the corresponding preset captured image according to that scene. The meta-universe character is thereby adapted one-to-one to the real person it corresponds to, avoiding the situation in which several action contents adapt to a single common action, which easily causes action disorder in the corresponding meta-universe character and makes the correct action content difficult to determine.
2. In this meta-universe remote interaction system based on motion capture, the character adaptation module marks action types a character does not need as useless actions and feeds the marked useless actions back to the database storage module, which eliminates the corresponding useless actions, freeing up storage space in the database storage module.
3. In this meta-universe remote interaction system based on motion capture, the character adaptation module transmits the marked useless actions to the useless action classifying module, which summarizes all useless actions and formulates a unified standard for useless action contents of the same type. For example, rest actions during basketball play, such as sitting on the ground, lying on a seat or sitting on a seat, are not basketball actions and are marked as useless; since they are all rest actions, the useless action classifying module summarizes them and classifies them as useless actions of the same type for later identification.
Drawings
FIG. 1 is an overall flow chart of the present invention;
FIG. 2 is a flow chart of the action analysis module of the present invention;
FIG. 3 is a flow chart of the action frequency calculation module of the present invention.
The meaning of each reference sign in the figures:
10. motion state capturing module;
20. action analysis module; 210. action structure identification unit; 220. environmental factor identification unit; 230. action content determination unit;
30. action frequency calculation module; 310. unit time determination unit; 320. overall action counting unit; 330. per-action calculation unit;
40. database storage module;
50. character adaptation module;
60. useless action classifying module;
70. repeated action filtering module.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to FIGS. 1-3, a meta-universe remote interaction system based on motion capture is provided, comprising a motion state capturing module 10, wherein the motion state capturing module 10 is used for capturing real human motion to generate captured images. The output end of the motion state capturing module 10 is connected with an action analysis module 20, which is used for analyzing the captured images and judging the action content corresponding to different captured images. The output end of the action analysis module 20 is connected with an action frequency calculation module 30, which is used for judging the occurrence probability of each captured image according to the capture counts of the different captured images. The output end of the action frequency calculation module 30 is connected with a database storage module 40, and the input end of the database storage module 40 is connected with the input end of the action analysis module 20. The database storage module 40 is used for storing each captured image and its corresponding action content, storing the occurrence probability of each captured image, classifying the different captured images of the same action content into an image type, comparing the occurrence probabilities of the captured images within that type, and selecting the captured image with the highest probability as the preset captured image of the type. The output end of the database storage module 40 is connected with a character adaptation module 50, which is used for determining the applicable scene of each meta-universe character and selecting the corresponding preset captured image according to the applicable scene as the selected action of the character.
In specific use, the motion state capturing module 10 captures real human motions, generates captured images and transmits them to the action analysis module 20. The action analysis module 20 analyzes the captured images and judges the action content corresponding to each: for example, if the person in a captured image lifts a leg forward with the upper body leaning forward and a football at the person's feet, the action content of that captured image is kicking a football. The action frequency calculation module 30 judges the occurrence probability of each captured image according to its capture count; the occurrence probability per unit time is taken as a measure of the person's behavioral or occupational habits, since an action a person performs frequently in reality is one the corresponding meta-universe character needs to imitate. The database storage module 40 stores each captured image with its corresponding action content and occurrence probability, classifies the different captured images of the same action content, compares the occurrence probabilities within the type, and selects the captured image with the highest probability as the preset captured image of the type. For the football-kicking action, for instance, people lean forward to different degrees and lift the leg to different heights; these variants are classified together, and the lean and lift amplitudes with the highest occurrence probability are judged to be this person's kicking action. The character adaptation module 50 determines the applicable scene of each meta-universe character and selects the corresponding preset captured image according to the applicable scene as the selected action of the character. In this way the meta-universe character is adapted one-to-one to the real person it corresponds to, avoiding the situation in which several action contents adapt to a single common action, which easily causes action disorder in the corresponding meta-universe character and makes the correct action content difficult to determine.
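As a minimal sketch of the selection logic just described (the field names "action" and "probability" are hypothetical, not the patent's actual data format), grouping captured images by action content and keeping the most probable variant as the preset image might look like this:

```python
from collections import defaultdict

def select_preset_images(captured_images):
    """For each action content, keep the captured image with the highest
    occurrence probability as the preset captured image of that type."""
    groups = defaultdict(list)
    for img in captured_images:
        groups[img["action"]].append(img)  # classify images of the same action content
    return {action: max(imgs, key=lambda i: i["probability"])
            for action, imgs in groups.items()}

presets = select_preset_images([
    {"id": "img-07", "action": "kick", "probability": 0.42},
    {"id": "img-13", "action": "kick", "probability": 0.11},
    {"id": "img-02", "action": "shoot", "probability": 0.25},
])
print(presets["kick"]["id"])  # img-07: the person's dominant kicking posture
```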
In addition, the action analysis module 20 comprises an action structure identification unit 210, which is used for determining the action structure of a captured image. The output end of the action structure identification unit 210 is connected with an environmental factor identification unit 220, which is used for determining the environment in which the action structure of the captured image is currently located. The output end of the environmental factor identification unit 220 is connected with an action content determination unit 230, which determines the action content currently represented by the action structure according to the action structure of the captured image and the environment in which it is located. In specific use, the action structure identification unit 210 determines the action structure of a captured image, for example the person's limb motion, generates action structure information and transmits it to the environmental factor identification unit 220. The environmental factor identification unit 220 determines the current environment of the action structure from this information: when a person simply raises both hands, the person may be stretching, but when the raised hands hold a ball, the person is shooting, and the basketball, hoop and court constitute the environment of the action structure. The action content determination unit 230 then determines the action content represented by the action structure from both the action structure and its environment, further improving the accuracy of action content identification.
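The structure-plus-environment decision can be pictured as a lookup from (action structure, environmental objects) to action content. The rules below are invented for illustration only and stand in for units 210-230, which the patent does not specify algorithmically:

```python
# Toy rule table: (action structure, environment objects) -> action content.
ACTION_RULES = {
    ("arms_raised", frozenset()): "stretching",
    ("arms_raised", frozenset({"basketball", "hoop"})): "shooting",
    ("leg_lift_lean_forward", frozenset({"football"})): "kicking a football",
}

def determine_action_content(structure, environment):
    """Combine the recognized structure (210) with the detected
    environment (220) to disambiguate the action content (230)."""
    return ACTION_RULES.get((structure, frozenset(environment)), "unknown")

print(determine_action_content("arms_raised", {"basketball", "hoop"}))  # shooting
print(determine_action_content("arms_raised", set()))                   # stretching
```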
Further, the action frequency calculation module 30 comprises a unit time determination unit 310, which is used for defining a unit time and counting the different actions occurring within it. The output end of the unit time determination unit 310 is connected with an overall action counting unit 320, which is used for determining the sum of all actions occurring in the unit time. The output end of the overall action counting unit 320 is connected with a per-action calculation unit 330, which is used for calculating the number of occurrences of each action in the unit time. In specific use, the unit time determination unit 310 defines a unit time and counts the different actions occurring within it, the overall action counting unit 320 determines the sum of all actions occurring in the unit time, and the per-action calculation unit 330 calculates the number of occurrences of each action against that sum, thereby determining the occurrence probability of each action for later selection of the corresponding action content.
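A sketch of the counting path through units 310, 320 and 330, under the assumption that the actions observed in one unit of time arrive as a simple list of labels:

```python
from collections import Counter

def action_occurrence_probabilities(actions_in_unit_time):
    """Count each action within one unit of time and convert the counts
    into occurrence probabilities."""
    counts = Counter(actions_in_unit_time)  # per-action occurrence counts (330)
    total = sum(counts.values())            # sum of all actions in the unit time (320)
    return {action: n / total for action, n in counts.items()}

probs = action_occurrence_probabilities(
    ["dribble", "shoot", "dribble", "rest", "dribble"])
print(probs)  # {'dribble': 0.6, 'shoot': 0.2, 'rest': 0.2}
```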
Still further, the action frequency calculation module 30 adopts an insertion sort algorithm, which comprises the following steps:
S1, determine the number of occurrences of each action in the unit time and draw up an action count set A = {a₁, a₂, …, aₙ}, where a₁ to aₙ represent the number of times each action occurs;
S2, select the second number a₂ as the key value and compare it with the value before it; if the previous value is larger, exchange them;
S3, select the third number a₃ as the key value and compare it forwards; if a previous number is larger, exchange them;
S4, proceed by analogy until all the action counts are sorted, so that the set A increases from left to right.
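Steps S1-S4 describe the exchange-based variant of insertion sort. A direct, runnable transcription (a sketch only, representing the set A as a plain list):

```python
def insertion_sort(counts):
    """Sort the action count set A = {a1, ..., an} per steps S1-S4:
    take the 2nd, 3rd, ... number as the key value and exchange it with
    the previous value while that value is larger."""
    a = list(counts)
    for i in range(1, len(a)):            # S2-S4: each number in turn is the key
        j = i
        while j > 0 and a[j - 1] > a[j]:  # previous value larger -> exchange
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a                              # A now increases from left to right

print(insertion_sort([7, 2, 9, 4, 1]))  # [1, 2, 4, 7, 9]
```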
Specifically, the output end of the character adaptation module 50 is connected with the input end of the database storage module 40. In specific use, when adapting a meta-universe character to a real person's actions, the actions to be adapted differ because different characters belong to different scenes. For a basketball player, for example, only the basketball-playing actions need to be adapted to the meta-universe character, and other actions need not be recorded. The character adaptation module 50 marks such action types as useless actions and feeds them back to the database storage module 40, which eliminates the corresponding useless actions, freeing up storage space in the database storage module 40.
In addition, the output end of the character adaptation module 50 is connected with a useless action classifying module 60, which is used for formulating a unified standard for useless action contents of the same type. In specific use, the character adaptation module 50 transmits the marked useless actions to the useless action classifying module 60, which summarizes all useless actions and formulates a unified standard for those of the same type. For example, rest actions during basketball play, such as sitting on the ground, lying on a seat or sitting on a seat, are not basketball actions and are marked as useless; since they are all rest actions, the useless action classifying module 60 summarizes them and classifies them as useless actions of the same type for later identification.
Further, the output end of the useless action classifying module 60 is connected with the input end of the database storage module 40. In specific use, the useless action classifying module 60 summarizes the useless actions, establishes a unified standard for useless action contents of the same type, and determines the feature points of each standard. For example, in rest actions the limbs hang naturally, so this limb posture serves as the feature point; when the database storage module 40 stores rest actions sharing the same feature points, an action can be judged to be a rest action, and therefore a useless action, simply by comparing the feature point information, without adaptation by the character adaptation module 50. This reduces the adaptation workload and improves the adaptation efficiency of the corresponding meta-universe characters.
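The feature-point shortcut might be sketched as a simple comparison against stored standards. The feature representation below is assumed, since the patent leaves it open:

```python
# Unified feature points per useless-action type, as established by module 60.
USELESS_ACTION_FEATURES = {
    "rest": {"limbs": "hanging_naturally"},
}

def is_useless_action(capture_features):
    """Judge a capture useless by feature-point comparison alone,
    so it never reaches the character adaptation module (50)."""
    return any(
        all(capture_features.get(k) == v for k, v in features.items())
        for features in USELESS_ACTION_FEATURES.values()
    )

print(is_useless_action({"limbs": "hanging_naturally", "pose": "seated"}))  # True
print(is_useless_action({"limbs": "raised", "pose": "jump"}))               # False
```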
Still further, the output end of the action analysis module 20 is connected with a repeated action filtering module 70, and the input end of the repeated action filtering module 70 is connected with the output end of the action frequency calculation module 30. The repeated action filtering module 70 is used for analyzing the different action structures corresponding to the same action content and, based on the number of occurrences of each action structure, formulating an occurrence count threshold and filtering out in advance action structures that fall below it, thereby reducing the storage load on the database storage module 40.
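Filtering rare action structures before storage reduces to dropping entries below a count threshold. A sketch with an assumed threshold value:

```python
def filter_rare_structures(structure_counts, threshold=5):
    """Repeated action filtering module (70): keep only action structures
    whose occurrence count reaches the threshold, so rare variants of the
    same action content never reach the database storage module (40).
    The threshold of 5 is an assumed example, not a value from the patent."""
    return {s: n for s, n in structure_counts.items() if n >= threshold}

kick_variants = {"deep_lean_kick": 37, "slight_lean_kick": 6, "off_balance_kick": 1}
print(filter_rare_structures(kick_variants))
# {'deep_lean_kick': 37, 'slight_lean_kick': 6}
```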
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the above-described embodiments, and that the above-described embodiments and descriptions are only preferred embodiments of the present invention, and are not intended to limit the invention, and that various changes and modifications may be made therein without departing from the spirit and scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (4)

1. A meta-universe remote interaction system based on motion capture, comprising a motion state capturing module (10), the motion state capturing module (10) being used for capturing real human motion to generate captured images, characterized in that: the output end of the motion state capturing module (10) is connected with an action analysis module (20), the action analysis module (20) is used for analyzing captured images and judging the action content corresponding to different captured images, the output end of the action analysis module (20) is connected with an action frequency calculation module (30), the action frequency calculation module (30) is used for judging the occurrence probability of each captured image according to the capture counts of the different captured images, the output end of the action frequency calculation module (30) is connected with a database storage module (40), the input end of the database storage module (40) is connected with the input end of the action analysis module (20), the database storage module (40) is used for storing each captured image and its corresponding action content, storing the occurrence probability of each captured image, classifying the different captured images of the same action content into an image type, comparing the occurrence probabilities of the captured images within that type, and selecting the captured image with the highest probability as the preset captured image of the type, the output end of the database storage module (40) is connected with a character adaptation module (50), and the character adaptation module (50) is used for determining the applicable scene of each meta-universe character and selecting the corresponding preset captured image according to the applicable scene as the selected action of the character;
the motion analysis module (20) comprises a motion structure identification unit (210), wherein the motion structure identification unit (210) is used for determining a motion structure of a captured image, an environment factor identification unit (220) is connected to the output end of the motion structure identification unit (210), the environment factor identification unit (220) is used for determining the environment where the motion structure of the captured image is located, a motion content determination unit (230) is connected to the output end of the environment factor identification unit (220), and the motion content determination unit (230) determines motion content represented by the motion structure of the captured image according to the motion structure of the captured image and the environment where the motion structure of the captured image is located;
the action frequency calculation module (30) comprises a unit time determination unit (310), wherein the unit time determination unit (310) is used for making unit time and counting different actions in the unit time, the output end of the unit time determination unit (310) is connected with an integral action counting unit (320), the integral action counting unit (320) is used for determining the sum of all actions in the unit time, the output end of the integral action counting unit (320) is connected with each action calculation unit (330), and each action calculation unit (330) is used for calculating the occurrence times of each action in the unit time;
the action frequency calculation module (30) adopts an insertion ordering algorithm, and the algorithm comprises the following steps:
s1, determining the occurrence times of each action in unit time, and drawing up an action time set,/>To->Representing the number of times each action occurs;
s2, selecting a second number as a key value, comparing the key value with the previous value, and exchanging if the previous value is larger;
s3, selecting three numbers as key values, comparing the key values forwards, and exchanging if the previous number is large;
s4, analogizing in sequence until all the action times are sequenced, and collecting the action timesSequentially increasing from left to right;
the method comprises the steps that the output end of an action analysis module (20) is connected with a repeated action filtering module (70), the input end of the repeated action filtering module (70) is connected with the output end of an action frequency calculation module (30), the repeated action filtering module (70) is used for analyzing different action structures corresponding to the same action content, and combining the occurrence times of different action structures to formulate an action structure time threshold and pre-filter action structures lower than the action structure time threshold.
2. The motion capture-based meta-universe remote interaction system of claim 1, wherein: the output end of the character adaptation module (50) is connected with the input end of the database storage module (40).
3. The motion capture-based meta-universe remote interaction system of claim 2, wherein: the output end of the character adaptation module (50) is connected with a useless action classifying module (60), and the useless action classifying module (60) is used for formulating a unified standard for useless action contents of the same type.
4. The motion capture-based meta-universe remote interaction system of claim 3, wherein: the output end of the useless action classifying module (60) is connected with the input end of the database storage module (40).
CN202310104534.8A 2023-02-13 2023-02-13 Meta-universe remote interaction system based on motion capture Active CN115793866B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310104534.8A CN115793866B (en) 2023-02-13 2023-02-13 Meta-universe remote interaction system based on motion capture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310104534.8A CN115793866B (en) 2023-02-13 2023-02-13 Meta-universe remote interaction system based on motion capture

Publications (2)

Publication Number Publication Date
CN115793866A (en) 2023-03-14
CN115793866B (en) 2023-07-28

Family

ID=85430995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310104534.8A Active CN115793866B (en) 2023-02-13 2023-02-13 Meta-universe remote interaction system based on motion capture

Country Status (1)

Country Link
CN (1) CN115793866B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113625869A (en) * 2021-07-15 2021-11-09 北京易智时代数字科技有限公司 Large-space multi-person interactive cloud rendering system
CN115660576A (en) * 2022-10-13 2023-01-31 科大乾延科技有限公司 Meta-universe conference information acquisition method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0816986B1 (en) * 1996-07-03 2006-09-06 Hitachi, Ltd. System for recognizing motions
US8665326B2 (en) * 2009-01-30 2014-03-04 Olympus Corporation Scene-change detecting device, computer readable storage medium storing scene-change detection program, and scene-change detecting method
CN109765998B (en) * 2018-12-07 2020-10-30 北京诺亦腾科技有限公司 Motion estimation method, device and storage medium based on VR and motion capture
CN114967937B (en) * 2022-08-03 2022-09-30 环球数科集团有限公司 Virtual human motion generation method and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113625869A (en) * 2021-07-15 2021-11-09 北京易智时代数字科技有限公司 Large-space multi-person interactive cloud rendering system
CN115660576A (en) * 2022-10-13 2023-01-31 科大乾延科技有限公司 Meta-universe conference information acquisition method

Also Published As

Publication number Publication date
CN115793866A (en) 2023-03-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 502-503, Floor 5, Building 5, Hongtai Smart Valley, No. 19, Sicheng Road, Tianhe District, Guangzhou, Guangdong 510000

Applicant after: Guangdong Feidie Virtual Reality Technology Co.,Ltd.

Applicant after: XI'AN FEIDIE VIRTUAL REALITY TECHNOLOGY CO.,LTD.

Address before: 518000 3311, Floor 3, Building 1, Aerospace Building, No. 51, Gaoxin South 9th Road, High tech Community, Yuehai Street, Nanshan District, Shenzhen, Guangdong

Applicant before: Shenzhen FEIDIE Virtual Reality Technology Co.,Ltd.

Applicant before: XI'AN FEIDIE VIRTUAL REALITY TECHNOLOGY CO.,LTD.

TA01 Transfer of patent application right

Effective date of registration: 20230628

Address after: 710000 Building D, National Digital Publishing Base, No. 996 Tiangu 7th Road, Yuhua Street Office, High tech Zone, Xi'an City, Shaanxi Province

Applicant after: XI'AN FEIDIE VIRTUAL REALITY TECHNOLOGY CO.,LTD.

Address before: Room 502-503, Floor 5, Building 5, Hongtai Smart Valley, No. 19, Sicheng Road, Tianhe District, Guangzhou, Guangdong 510000

Applicant before: Guangdong Feidie Virtual Reality Technology Co.,Ltd.

Applicant before: XI'AN FEIDIE VIRTUAL REALITY TECHNOLOGY CO.,LTD.

GR01 Patent grant