Detailed Description
The invention will be described in further detail with reference to the accompanying drawings and specific preferred embodiments.
As shown in FIG. 1, the motion capture-based virtual reality integrated system comprises an inertial motion capture device, an indoor positioning device, a virtual reality device, a data glove device, an electronic simulation gun device and a backpack computer device. The inertial motion capture device, the indoor positioning device, the virtual reality device, the data glove device and the electronic simulation gun device are all wirelessly connected to the backpack computer device. Wireless communication means include, but are not limited to, Bluetooth, ZigBee, Wi-Fi and 2.4 GHz communication.
The inertial motion capture device comprises a plurality of motion capture modules which can be fixed on a human body. The number of the motion capture modules can be arbitrarily selected according to the situation, and can be 3, 6, 9, 11, 15, 17 or the like.
When the number of motion capture modules is 3, the 3 modules are fixed at three different positions on the user through straps or professional motion capture clothing, the three positions preferably being: (1) the head, the torso and the buttocks; or (2) the head, one of the upper arms (left or right), and one of the forearms (left or right).
When the number of motion capture modules is 6, the 6 modules are preferably fixed, through straps or professional motion capture clothing, to the head, the torso, the buttocks, one of the feet, one of the upper arms and one of the forearms; or to the head, the torso, the buttocks, one of the upper arms, one of the forearms and one of the hands (left or right).
When the number of motion capture modules is 9, the 9 modules are preferably fixed, through straps or professional motion capture clothing, to the head, the torso, the buttocks, both thighs, both calves, one of the upper arms and one of the forearms; or to the head, the torso, the buttocks, both thighs, both calves, both upper arms and both forearms.
When the number of motion capture modules is 11, the 11 modules are preferably fixed, through straps or professional motion capture clothing, to the head, the trunk, the buttocks, both thighs, both calves, one of the feet, one of the upper arms and one of the forearms; or to the head, the trunk, the buttocks, both thighs, both calves, both upper arms and both forearms.
When the number of motion capture modules is 15, the motion capture modules are preferably fixed to the head, trunk, buttocks, both thighs, both calves, both feet, both upper arms, both forearms, and both hands, respectively.
When the number of motion capture modules is 17, the modules are preferably fixed to the head, the trunk, the buttocks, both thighs, both calves, both feet, both upper arms, both forearms, both hands and both shoulders, respectively.
Each of the motion capture modules described above includes a motion capture sensor.
The motion capture sensor includes a three-axis MEMS acceleration sensor, a three-axis MEMS angular velocity sensor (also known as a gyroscope sensor), a three-axis MEMS magnetometer (also known as an electronic compass sensor), a data filtering sensor, and a microprocessor.
The triaxial MEMS acceleration sensor, the triaxial MEMS angular velocity sensor and the triaxial MEMS magnetometer are used to measure acceleration signals, angular velocity signals and geomagnetic signals, respectively.
The triaxial MEMS acceleration sensor, the triaxial MEMS angular velocity sensor and the triaxial MEMS magnetometer are all connected with the data filtering sensor, and the data filtering sensor is also connected with the microprocessor.
The data filtering sensor performs primary filtering on the data detected by the triaxial MEMS acceleration sensor, the triaxial MEMS angular velocity sensor and the triaxial MEMS magnetometer and then transmits the data to the microprocessor; a Kalman filter II is arranged in each microprocessor.
The microprocessor includes, but is not limited to, an MCU, DSP or FPGA, preferably of the type NXP LPC13xx. The microprocessor communicates with the triaxial MEMS acceleration sensor, the triaxial MEMS angular velocity sensor and the triaxial MEMS magnetometer through interfaces such as SPI (serial peripheral interface), I2C (two-wire serial bus) or USART (serial port).
The motion capture sensor can collect skeleton gesture data of a human body contact part and perform displacement correction on the collected skeleton gesture data.
The working process of the motion capture sensor is as follows:
the triaxial MEMS acceleration sensor, the triaxial MEMS angular velocity sensor and the triaxial MEMS magnetometer are used for respectively acquiring acceleration, angular velocity and geomagnetic field intensity of a human body contact part.
The data filtering sensor performs primary filtering processing on the collected acceleration, angular velocity and geomagnetic field intensity data, and then transmits the acceleration, angular velocity and geomagnetic field intensity signal data in a normal range to the microprocessor.
The microprocessor NXP LPC13xx receives the acceleration, angular velocity and geomagnetic intensity signals and generates quaternions or Euler angles; the Kalman filter built into the microprocessor performs deep filtering and fusion on the received acceleration, angular velocity and geomagnetic field intensity data using the Kalman filtering algorithm and processes the data into user body posture information.
While the deep filtering and fusion are carried out, the microprocessor also analyzes the various error sources of the geomagnetic sensor, establishes a complete ellipsoid error model of the geomagnetic sensor, obtains the ellipsoid model coefficients by fitting with a least-squares estimation method, derives the geomagnetic sensor error matrix and offset vector from the ellipsoid model coefficients, and finally corrects the skeleton posture data output by the geomagnetic sensor in its magnetic environment.
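The least-squares ellipsoid fit can be sketched as follows. This is a simplified illustration assuming an axis-aligned ellipsoid model, so it recovers only the offset vector (hard-iron error); the complete model described above also derives a full error matrix for cross-axis (soft-iron) effects. Function names are illustrative only:

```python
import numpy as np

def fit_ellipsoid_offset(samples):
    """Fit an axis-aligned ellipsoid a*x^2 + b*y^2 + c*z^2 + d*x + e*y + f*z = 1
    to raw magnetometer samples by linear least squares, then derive the
    offset vector (the ellipsoid centre) from the fitted coefficients."""
    x, y, z = samples[:, 0], samples[:, 1], samples[:, 2]
    D = np.column_stack([x * x, y * y, z * z, x, y, z])
    coeffs, *_ = np.linalg.lstsq(D, np.ones(len(samples)), rcond=None)
    a, b, c, d, e, f = coeffs
    # completing the square gives the centre of the fitted ellipsoid
    offset = np.array([-d / (2 * a), -e / (2 * b), -f / (2 * c)])
    return coeffs, offset

def calibrate(samples, offset):
    """Subtract the estimated offset from raw magnetometer readings."""
    return samples - offset
```

Feeding the fit with readings taken while the sensor is rotated through many orientations recenters the measured field on the origin, which is the displacement-correction step described above.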
Finally, the microprocessor transmits the corrected skeleton posture data and corrected user body posture information (including azimuth information, euler angles, quaternion information and the like) to the backpack type computer device in a wireless or wired mode.
The Kalman filtering algorithm is a recursive autoregressive data processing algorithm; it is mature prior art and is realized through 5 standard formulas. The algorithm estimates the process state through a feedback control method and cyclically corrects each output state result until optimal state process data are obtained. The Kalman filtering algorithm can be divided into two processes: time update and measurement update. The former projects the current state variable estimate and error covariance forward in time to construct the a priori estimate for the next time step; the latter combines the a priori estimate with the measured variable to construct an improved a posteriori estimate. The time update process can be regarded as a prediction process and the measurement update process as a correction process, so the whole estimation algorithm is essentially a predictor-corrector algorithm of a numerical solution.
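The predict/correct cycle of the 5 standard formulas can be sketched as follows. This is a generic one-dimensional illustration with hypothetical noise parameters, not the parameterization tuned for the system described here:

```python
def kalman_filter(measurements, A=1.0, C=1.0, Q=0.01, R=1.0):
    """Generic 1-D Kalman filter illustrating the 5 standard formulas."""
    x, P = 0.0, 1.0              # initial state estimate and error covariance
    estimates = []
    for z in measurements:
        # --- time update (prediction) ---
        x_prior = A * x                            # (1) a priori state estimate
        P_prior = A * P * A + Q                    # (2) a priori error covariance
        # --- measurement update (correction) ---
        K = P_prior * C / (C * P_prior * C + R)    # (3) Kalman gain
        x = x_prior + K * (z - C * x_prior)        # (4) a posteriori state estimate
        P = (1 - K * C) * P_prior                  # (5) a posteriori error covariance
        estimates.append(x)
    return estimates
```

Run on a noisy constant signal, the estimate converges toward the true value while the gain K settles to a steady-state weighting between prediction and measurement.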
Through the Kalman filtering algorithm and the above data processing, the system can be used within a certain range of ferromagnetic and weak-magnetic-field environments: when objects carrying weak magnetic fields, such as mobile phones, come close to the sensor, signal acquisition by the geomagnetic sensor is not affected and the gesture data remain usable.
The motion capture module has the advantages of small volume, light weight and long battery life; it does not hinder human motion when bound to the body, and its high sampling frequency allows complex, high-speed motion to be collected. The module configuration is flexible, capturing either local motion or whole-body motion; motion capture is not limited by the site, and the capture effect is not affected by occlusion from objects in the real environment; the cost of the motion capture system is relatively low.
The data glove device comprises a glove body and a plurality of hand joint gesture sensors arranged in the glove body.
The number of the hand joint posture sensors can be arbitrarily selected according to the situation, and can be 6, 10 or 15, etc.
In one embodiment, the number of joint sensors is 6: 1 is fixed on the back of the hand and 1 on each of the 5 fingers.
In one embodiment, the number of joint sensors is 10: 1 is fixed on the back of the hand through the glove, 1 on the thumb, and 2 on each of the other four fingers.
In one embodiment, the number of joint sensors is 15: 1 is fixed on the back of the hand through the glove, 2 on the thumb, and 3 on each of the other four fingers.
The hand joint gesture sensor also comprises a triaxial MEMS acceleration sensor, a triaxial MEMS angular velocity sensor, a triaxial MEMS magnetometer, a data filtering sensor and a microprocessor. The components of the hand joint posture sensor are the same as the motion capture sensor, and the connection relationship and the working process between the components are basically similar, and will not be described in detail here.
The electronic simulation gun devices comprise electronic simulation guns, electronic simulation gun data acquisition sensors, wireless communication modules, power supplies and the like, wherein the electronic simulation gun data acquisition sensors are arranged in the electronic simulation guns.
The number of electronic simulation guns can be set to 1, 2, 3 or more according to the number of users; each user carries one electronic simulation gun and simulates actions such as gun switching, loading and shooting in an open space.
The electronic simulation gun is preferably manufactured at a 1:1 scale to a real gun, with the appearance, weight and mode of operation all designed to match the real weapon, giving a highly realistic experience.
The electronic simulation gun data acquisition sensor comprises an electronic gun attitude sensor and an electronic gun operation sensor.
The electronic gun operation sensor is one of, or a combination of, a shooting sensor, a cartridge clip sensor, a loading sensor, a safety sensor and the like.
The electronic gun attitude sensor also comprises a triaxial MEMS acceleration sensor, a triaxial MEMS angular velocity sensor, a triaxial MEMS magnetometer, a data filtering sensor and a microprocessor. The components of the electronic gun posture sensor are the same as the motion capture sensor, and the connection relationship and the working process between the components are basically similar, and will not be described in detail here.
However, the microprocessor in the gun attitude sensor is also connected to the gun operation sensor.
The electron gun attitude sensor measures acceleration, angular velocity and geomagnetic field intensity, while the electron gun operation sensor acquires the state of the gun. The data are input into the microprocessor for processing, which outputs the quaternion or Euler angles of each node; after a data analysis and reduction algorithm, the signal data are transmitted to the backpack computer in a wired or wireless mode, and the computer, connected to the virtual reality device through a data interface, restores the state in real time.
The user holds the electronic simulation gun and simulates the loading, magazine changing and shooting operations of a real gun by operating the trigger, the loading mechanism, the cartridge clip and the like. The shooting sensor, cartridge clip sensor, loading sensor and safety sensor detect the shooting, magazine changing, loading and safety states in real time and transmit the operation state data to the microprocessor, which wirelessly transmits them to the backpack computer device for data processing and maps the state of the gun into the virtual world in the virtual reality device.
The virtual reality device comprises a VR wearing device and an environment feedback device.
The VR wearing device is a VR helmet, VR glasses or the like.
The environmental feedback device is one of, or a combination of, an audio system, a controllable treadmill, electrode stimulation patches, a force feedback suit/shoes and the like. The audio system is a loudspeaker that feeds back audio signals to the human ear. The force feedback suit/shoes apply actions to certain parts of the human body through drivers, i.e. they feed back force signals to the human body. The electrode stimulation patches are electrode patches attached to the skin; a voltage applied between two electrode patches stimulates the nerves or muscles between them, i.e. feeds back tactile signals to the human body.
The environment feedback device is worn by the target user and fixed by the helmet or by straps, and is preferably connected wirelessly to the backpack computer device. The backpack computer device generates a 3D virtual environment and a virtual character for the user, maps the received position information, body posture information, finger posture information and electronic simulation gun state information onto the virtual character and the environment, and, according to the interaction between the virtual character and the environment, simultaneously transmits the corresponding video, audio and pressure signals to the VR wearing device and the environment feedback device through different signal interfaces.
The indoor positioning device is a UWB indoor positioning system. UWB indoor positioning systems are state of the art, see in particular the patent of application number CN201520817538.1 filed earlier by the applicant.
The UWB indoor positioning system comprises a plurality of positioning anchor nodes, a plurality of mobile tags, a synchronizer and a server. The positioning anchor nodes are fixedly arranged indoors; the mobile tags are worn by each target user and transmit data to the positioning anchor nodes through UWB; the synchronizer carries out time-correction communication with each positioning anchor node to realize time synchronization between the anchor nodes; the server is provided with wireless access nodes, and each positioning anchor node exchanges data with the server through the wireless access nodes.
In a specific implementation, a plurality of positioning anchor nodes can be arranged according to the specific site area, and the user wears the mobile tags. The number of mobile tags may be 1, 2, 3 or the like; they are preferably fixed by straps or professional binding garments to the head, chest, wrist, etc. of the user, who walks and moves within the site covered by the anchor nodes.
UWB technology enables dynamic, precise positioning of the target user in the indoor environment. The system has low power consumption, and its low-complexity design makes it easy to operate; no wiring is needed, which improves deployment efficiency. The device outputs the position information of the target user.
The positioning characteristic of the UWB indoor positioning system is that no accumulated error is generated even after long-time use. However, the device has a certain positioning error range, approximately +/-20 cm. In real-time use this amounts to small-range positioning jitter: the displacement is not smooth enough, so it cannot directly replace the displacement data in the motion capture gesture; direct replacement would produce a mismatch between gesture and displacement.
As described in the background art, in the three-axis MEMS angular velocity sensor according to the present invention, i.e., the gyroscope, when the angular velocity is integrated in time, although the data filtering sensor is used for the primary filtering and the kalman filter in the microprocessor is used for the deep filtering, the integrated accumulated error still gradually increases with time, and the measured motion gesture data still has a certain deviation from the actual data.
The invention further solves the problem of deviation between the measured motion gesture data and the actual data by adopting the following method.
1. Posture recombination:
a Kalman filter II is used with the UWB indoor positioning system and the backpack computer device. It fuses the positioning data of the UWB indoor positioning system with the gesture data measured by the inertial motion capture sensors (including the hand joint gesture sensors, the electron gun gesture sensor and the like), i.e. it fuses the absolute coordinate position with the coordinates from motion capture, correcting the accumulated error of gyroscope integration in the motion capture device and thereby providing a more realistic position effect for the user.
A method for fusing indoor positioning data and motion capture data comprises the following steps:
step 1, motion capture data acquisition: acquiring motion capture data of a human body through a motion capture sensor in a virtual reality integrated system; the virtual reality integrated system is provided with an inertial motion capture device, the inertial motion capture device comprises a plurality of motion capture sensors which can be fixed on a human body, and the motion capture sensors can automatically capture and collect motion data of a human body contact part, namely skeleton gesture data.
Step 2, indoor positioning data acquisition: indoor positioning data are obtained through a UWB indoor positioning system.
Step 3, obtaining the fusion displacement: the motion capture data acquired in step 1 and the indoor positioning data acquired in step 2 are fused using the Kalman filtering algorithm to obtain the fusion displacement.
Assuming that the coordinate data in the motion capture data collected in step 1 and the indoor positioning data collected in step 2 are both sets of two-dimensional coordinate points (x, y), where x and y respectively represent the abscissa and ordinate of the point, the fusion displacement is obtained as follows:

Step 31, establishing the state equation: taking the displacement increment of the motion capture data acquired in step 1 as the state input, the state equation is established as

    x̂ₖ⁻ = A·x̂ₖ₋₁ + uₖ + wₖ

where x̂ₖ⁻ is the a priori estimate of the motion capture position at time k; A is the state transition matrix, taken as the 2x2 identity matrix; x̂ₖ₋₁ is the a posteriori estimate of the motion capture position at time k-1; uₖ is the displacement increment of the motion capture data acquired in step 1; and wₖ is the process noise, whose covariance matrix is measured experimentally and is an adjustable parameter, preferably a diagonal matrix whose parameters range from 0 to 500.
Step 32, establishing the observation equation: taking the indoor positioning data acquired in step 2 as the observed quantity, the observation equation is established as

    zₖ = C·x̂ₖ + vₖ

where zₖ is the input coordinate data of the UWB indoor positioning system at time k; C is the observation matrix, preferably the 2x2 identity matrix; x̂ₖ is the a posteriori estimate of the position at time k; and vₖ is the observation noise, whose covariance matrix is measured experimentally, preferably a diagonal matrix whose parameters range from 0 to 100.
Step 33, calculating fusion displacement: and solving the state equation established in the step 31 and the observation equation established in the step 32 to obtain the fusion displacement.
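Steps 31 to 33 can be sketched as follows. The prediction uses the motion capture displacement increments and the correction uses the UWB coordinates; the noise levels q and r are hypothetical values chosen inside the ranges stated above, and the interface is illustrative:

```python
import numpy as np

def fuse_displacement(mocap_increments, uwb_positions, q=100.0, r=20.0):
    """Steps 31-33: predict with motion-capture displacement increments
    (state equation), correct with UWB coordinates (observation equation).
    A and C are 2x2 identity matrices; q, r are illustrative noise levels."""
    A = C = np.eye(2)
    Q = q * np.eye(2)                    # process noise covariance (range 0-500)
    R = r * np.eye(2)                    # observation noise covariance (range 0-100)
    x = uwb_positions[0].astype(float)   # initialise at the first UWB fix
    P = np.eye(2)
    fused = [x.copy()]
    for u, z in zip(mocap_increments[1:], uwb_positions[1:]):
        # state equation: x_k^- = A x_{k-1} + u_k (+ w_k)
        x_prior = A @ x + u
        P_prior = A @ P @ A.T + Q
        # observation equation: z_k = C x_k (+ v_k)
        K = P_prior @ C.T @ np.linalg.inv(C @ P_prior @ C.T + R)
        x = x_prior + K @ (z - C @ x_prior)
        P = (np.eye(2) - K @ C) @ P_prior
        fused.append(x.copy())
    return np.array(fused)
```

Because the increments carry the smooth short-term motion while the UWB fixes anchor the absolute position, the fused track follows the motion capture shape without accumulating gyroscope drift.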
Step 4, displacement deviation correction: the posture data of each bone captured by each motion capture sensor in the virtual reality integrated system are analyzed and the relative displacement coordinates of each bone are calculated; displacement correction is then carried out for each motion capture sensor according to the fusion displacement obtained in step 3 and the bone relative displacement coordinates, forming the gesture recombination displacement.
When the displacement is corrected, in order to enable the fusion displacement obtained in the step 3 to be matched with the original skeleton gesture, judging based on whether a human body has a landing place or not; under the condition that a human body lands, calculating the positions of bones of the whole body by taking the landing point as an origin; if no new landing point is generated in the correction process, keeping the original point unchanged; if a new landing point is generated in the correction process, the origin becomes the fusion displacement at the current moment.
Under the condition that the human body lands, the positions of the bones of the whole body are calculated, taking the landing point as the origin, using a pose matrix; the pose matrix T is represented as follows:

        | nx  ox  ax  px |
    T = | ny  oy  ay  py |  =  | R  P |
        | nz  oz  az  pz |     | O  I |
        | 0   0   0   1  |

In the above formula, T represents the pose matrix; n represents the normal vector, o the orientation (direction) vector, a the approach vector and p the translation vector; R represents the 3x3 rotation matrix, P the 3x1 position matrix, O the 1x3 perspective matrix and I the proportional conversion (scale) factor; x, y, z represent the three coordinate axis directions.
The rotation matrix R is derived from the gesture data of the motion capture sensor. The gesture data of the motion capture sensor is the quaternion q = (w, x, y, z), and the conversion formula from quaternion to rotation matrix is:

        | 1-2(y²+z²)   2(xy-wz)     2(xz+wy)   |
    R = | 2(xy+wz)     1-2(x²+z²)   2(yz-wx)   |
        | 2(xz-wy)     2(yz+wx)     1-2(x²+y²) |

The position matrix P is initially the landing point; subsequent positions are obtained by multiplying the pose matrix T by the bone parameter matrix of the corresponding bone (for example, the right thigh bone parameter matrix; the bone parameter matrix is a fixed parameter). The O matrix and the I matrix are fixed parameter matrices.
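The quaternion-to-rotation conversion and the assembly of the pose matrix T can be sketched as follows; this is a minimal illustration, and any bone parameter matrix used with it is hypothetical, since the actual bone parameters are fixed calibration data:

```python
import numpy as np

def quat_to_rotation(w, x, y, z):
    """Standard unit-quaternion (w, x, y, z) to 3x3 rotation matrix conversion."""
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

def pose_matrix(quat, position):
    """Assemble the 4x4 pose matrix T = [[R, P], [O, I]] from the sensor
    quaternion (rotation R) and the translation vector P."""
    T = np.eye(4)                        # O row and I element are fixed: [0 0 0 1]
    T[:3, :3] = quat_to_rotation(*quat)
    T[:3, 3] = position
    return T
```

A child bone's position then follows by multiplying T with the corresponding fixed bone parameter matrix, e.g. `T @ right_thigh_offset` for a hypothetical right-thigh offset in homogeneous coordinates.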
Step 5, forming the final output displacement: the fusion displacement obtained in step 3 and the gesture recombination displacement formed in step 4 form a preliminary output displacement; the preliminary output displacement is subjected to Kalman filtering, removing the flash points generated in the displacement correction process, so as to form a smooth final output displacement for each motion capture sensor.
When the preliminary output displacement is subjected to Kalman filtering, the Kalman filtering state equation is

    x̂ₖ⁻ = A·x̂ₖ₋₁

where the state vector x̂ₖ = [sₖ, ṡₖ]ᵀ contains the state quantity sₖ of the preliminary output displacement at time k and its first derivative ṡₖ; the state matrix is

    A = | 1  tₛ |
        | 0  1  |

where tₛ is the sampling period of the motion capture sensor, a fixed parameter; x̂ₖ₋₁ is the state quantity of the output displacement at time k-1.

The Kalman filtering observation equation is

    zₖ = C·x̂ₖ

where x̂ₖ is the a posteriori estimate of the preliminary output displacement and its first derivative; C is the observation matrix, taken as C = [1 0]; and zₖ is the observed quantity (the preliminary output displacement).
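The constant-velocity smoothing filter with state [s, ṡ] can be sketched as follows for one displacement axis; the noise levels q and r and the 60 Hz sampling period are illustrative, not the system's tuned values:

```python
import numpy as np

def smooth_displacement(raw, ts=1 / 60.0, q=1e-3, r=0.5):
    """Constant-velocity Kalman smoother for one displacement axis:
    state [s, s_dot], A = [[1, ts], [0, 1]], C = [1, 0].
    q and r are illustrative noise levels."""
    A = np.array([[1.0, ts], [0.0, 1.0]])
    C = np.array([[1.0, 0.0]])
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([raw[0], 0.0])          # start at the first sample, zero velocity
    P = np.eye(2)
    out = [x[0]]
    for z in raw[1:]:
        # time update with the constant-velocity model
        x = A @ x
        P = A @ P @ A.T + Q
        # measurement update against the preliminary output displacement
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        x = x + (K @ (np.array([z]) - C @ x)).ravel()
        P = (np.eye(2) - K @ C) @ P
        out.append(x[0])
    return np.array(out)
```

A sudden jump (flash point) in the raw sequence is only partially followed in a single step, because the gain weights the measurement against the constant-velocity prediction, so the output track stays smooth.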
The system analyzes the bone posture data of the motion capture system (including the data collected by the motion capture sensors, the hand joint posture sensors and the electron gun posture sensor) and calculates the relative displacement coordinates of each bone. Displacement correction of the motion capture system is then carried out by fusing the indoor positioning displacement with the relative coordinates of the bones.
2. Output filtering:
the output displacement consists of the gesture recombination displacement and the Kalman fusion displacement. Because there is necessarily a deviation between the two (they come from two unrelated systems), a flash point (a sudden jump in position) is produced whenever a landing point is generated and the Kalman fusion displacement replaces the new origin. The presence of flash points would make the character appear to flicker or teleport in the final output, so they must be eliminated or smoothed. The output Kalman filtering is the smoothing of these flash points.
The knapsack type computer device comprises a computer device, a binding belt, a knapsack, a reinforcing belt, a damping device, a buffering device and the like.
The computer device includes: the system comprises a computer host, a standard video interface, a standard audio interface, a standard USB3.0 interface, a wireless communication module, a battery power supply system, a charging system and a voltage conversion circuit.
The backpack computer device is internally provided with a Kalman filter I and a simulation software system. The backpack computer is preferably connected wirelessly to all of the microprocessors described above.
The simulation software system is a mature software system and can be directly purchased and used, and the application is not described in detail.
The backpack computer device is preferably connected wirelessly with the inertial motion capture device, the indoor positioning device, the virtual reality device, the data glove device and the electronic simulation gun device, and the signals of these devices are input to the backpack computer device. The Kalman filter fuses the gesture output data of motion capture with the positioning output data using a data fusion algorithm with recursive autoregressive filtering. From the various signals of the inertial motion capture device, the indoor positioning device, the data glove device and the electronic simulation gun device, the backpack computer device generates a 3D virtual environment and a virtual character for the user, which are fed back and displayed in the virtual reality device. The 3D virtual environment includes a virtual scene, one or more user-corresponding characters, and a series of virtual objects. The three can interact with each other, producing effects identical to the real world that accord with objective rules.
The system adopts inertial sensor technology: inertial sensor modules worn on the body capture human body action gesture data in real time, upload the gesture data to an upper computer through wireless communication, and restore the human body gestures in real time. It integrates backpack computer technology, virtual reality glasses technology, indoor positioning technology, electronic simulation gun technology, data glove technology, ergonomics, data fusion technology and geomagnetic anti-interference technology into one virtual reality system.
The virtual reality integrated system of the present invention is described in detail below in conjunction with a specific example.
Assume that in this embodiment a user performs individual combat training or individual tactical cooperative combat in a virtual environment. 17 motion capture modules are bound to the user's whole body, the binding positions being the head, chest, buttocks, both shoulders, both upper arms, both forearms, both hands, both thighs, both calves and both feet. The mobile tag of the UWB indoor positioning system is worn on the tactical helmet; the data glove devices are worn on both hands; the user holds the electronic simulation gun, and flip-up VR glasses are worn on the tactical helmet.
Each motion capture module, each hand joint gesture sensor and each electron gun gesture sensor acquires the azimuth information of its node by integrating the angular velocity, and simultaneously acquires its orientation relative to the gravity direction and the geomagnetic direction by measuring geomagnetism and gravitational acceleration. The sensors of each module transmit the acceleration, angular velocity and geomagnetic information to the microprocessor; the microprocessor integrates the acceleration twice to obtain the displacement information of each part, and corrects the integration error of each module according to biomechanical constraints and external contact judgments. The microprocessor transmits the acceleration, angular velocity, geomagnetic, displacement and azimuth information of each module sensor to the backpack computer in a wired or wireless mode.
The mobile tag of the UWB indoor positioning system is worn on the user's tactical helmet, and the user moves within the site where the positioning anchor nodes and the synchronizer are arranged. The mobile tag worn on the human body exchanges data with the positioning anchor nodes over UWB, the synchronizer carries out timing communication with each anchor node, and each anchor node exchanges data with the server through the wireless access node. By calculating the time differences between the tag and each anchor node, the server outputs the absolute coordinates of the mobile tag in the space through an indoor positioning algorithm, and sends the position information of the mobile tag to the backpack computer device in a wired or wireless mode.
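The server-side position calculation can be illustrated with a least-squares multilateration sketch. This assumes tag-anchor ranges are already available; the actual algorithm of the referenced patent (CN201520817538.1) works from time differences between anchors and may differ in detail:

```python
import numpy as np

def locate_tag(anchors, distances):
    """Linear least-squares multilateration: estimate the 2-D tag position
    from known anchor coordinates and measured tag-anchor ranges.
    Subtracting the first anchor's range equation from the others cancels
    the quadratic |p|^2 term and linearises the problem in p."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, d0 = anchors[0], d[0]
    # |p - a_i|^2 - |p - a_0|^2 = d_i^2 - d_0^2  expands to  2(a_i - a_0).p = b_i
    A = 2 * (anchors[1:] - x0)
    b = (d0**2 - d[1:]**2) + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With four anchors at the corners of the site, three independent linear equations over-determine the two unknown coordinates, which is what lets the least-squares solution average out ranging noise.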
The virtual reality device comprises helmet-mounted flip-up VR glasses, a sound system and a plurality of electrode patches on the user's body. The helmet-mounted flip-up VR glasses display the three-dimensional virtual space picture; the sound system feeds back the various sounds of the virtual environment; and the electrode patches feed back the various stimuli of the virtual environment to the user. The simulation software fuses, through the algorithms above, the information collected by the inertial motion capture device, the indoor positioning device, the data glove device and the electronic simulation gun device, outputs the resulting signals to the virtual reality device, and drives the helmet-mounted flip-up VR glasses, the sound system and the electrode patches to act on the user, producing a deeply immersive and vivid virtual environment.
The backpack computer runs the simulation software, and the virtual reality device generates a three-dimensional virtual space acting on the user, one containing events that do not exist in the real world or occur only with small probability. For example, the user simulates the individual combat training and tactical coordination a special force would face during a sudden armed conflict, so as to complete the task of quelling the conflict. In the virtual environment, the user can shoot at and subdue armed personnel using the electronic simulation gun in hand, and virtual characters can attack the user or attack and damage other users. Facing armed personnel in the virtual environment, the user can dodge, run, jump, crawl, kneel and so on, while killing or subduing the virtual armed militants with the electronic simulation gun. Multiple users can communicate with sign language and tactical gestures through the gloves, or through the voice system. If the user is hit by other users or by armed militants in the virtual environment, the electrode patches of the virtual reality device generate stimulus signals corresponding to the attack intensity at the corresponding positions, so that the user feels a realistic hit.
In accordance with the above examples, and in combination with the prior art, the similarities and differences between the motion capture-based virtual reality integrated system of the present invention and a common 3D role-playing game are explained below.
Similarity: in both, the user manipulates a virtual character to perform certain activities and experiences in a virtual 3D world environment. Differences: the present invention operates immersive 3D virtual reality software and controls the virtual character through the user's limb movements, finger movements, simulated gun actions and speech, just as a person in the real world operates his own body, whereas a common 3D role-playing game controls the character with a mouse and keyboard. Moreover, in a common 3D role-playing game the user can only see a flat image on a display, seeing only the character he plays and the characters in the environment, and cannot experience the interaction between the in-game character and its surroundings through other senses. With the present virtual reality integrated system, a corresponding stereoscopic view of the 3D virtual environment is provided according to the changes of the character in the virtual environment, improving the sense of reality so that the user feels present in the scene; meanwhile, through the environment feedback device, the user can experience the interaction between the virtual environment and the real character through other parts of the body.
In conclusion, the motion capture modules, the hand joint gesture sensors and the electronic gun gesture sensor are small, lightweight and convenient to wear, do not hinder movement when bound to the human body, and have a high sampling rate that allows complex, high-speed motions to be sampled; wearing is flexible, and a suitable wearing combination can be selected according to actual demand; motion capture is not limited by the site, and the motion capture effect is not affected by occlusion by real objects; the cost of motion capture is relatively low. The indoor positioning device can capture and locate in real time the positions of a plurality of users within the space where it is deployed, and output the absolute coordinates of the users; the indoor positioning device adopts UWB positioning technology, which has a high sampling frequency, can track and locate users in real time, and can quickly follow rapid user movements; wearing is flexible, as the tag can be worn on the head, chest or wrist according to specific requirements; deployment is simple and convenient, since positioning can be set up merely by deploying a number of anchor nodes, synchronizers, a small amount of auxiliary power supplies and other devices in the space to be positioned; positioning is not affected by the environment or by light, and can be deployed in open outdoor places; the cost of UWB indoor positioning is relatively low.
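The principle by which the anchor nodes yield absolute user coordinates can be sketched with a minimal 2D trilateration example, assuming three anchors with known coordinates and measured tag-to-anchor distances. Real UWB deployments typically use time-difference-of-arrival and more anchors; the linearized least-squares form below only illustrates the geometry.

```python
def trilaterate(anchors, distances):
    """Solve for the tag position (x, y) from three anchor positions and
    the corresponding measured distances, by subtracting the circle
    equations pairwise to obtain a linear 2x2 system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```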
The data glove is convenient to wear and its modules are small; it works simply by putting on the dedicated data glove carrier and connecting it to the backpack computer, making it convenient to use; configuration is flexible, as different joints can be configured according to specific requirements so that the virtual experience is completed in the most suitable configuration; it is not affected by the light environment and can be used in direct sunlight; its sampling frequency is high, allowing complex and rapid movements to be captured and sampled.
In addition, the electronic simulation gun, the virtual reality glasses and the backpack computer technology solve the problem of real-time restoration of the wearable device's posture and game state, improving the user experience. The data glove technology, the virtual reality glasses and the backpack computer technology solve the problem of real-time restoration and display of the wearer's limbs and fingers, likewise improving the user experience. The data fusion and geomagnetic anti-interference technology reduces the interference of a complex magnetic field environment with the electronic compass sensor, improving the fidelity to the physical environment and the user experience.
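One common way to realize such geomagnetic anti-interference is a complementary filter: the gyroscope is trusted over short intervals, and the electronic compass corrects drift only when the measured magnetic field magnitude is close to the expected ambient field, i.e. appears undistorted. The sketch below is a hedged illustration of that idea; the field constants, tolerance and gain are assumptions for the example, not values from the invention.

```python
import math

EARTH_FIELD_UT = 50.0  # assumed nominal local field magnitude, microtesla
TOLERANCE_UT = 10.0    # reject compass readings outside this band

def fuse_heading(prev_heading, gyro_rate, dt, mag_xyz, gain=0.02):
    """Propagate heading (radians) with the gyro; blend in the compass
    heading only when the field magnitude looks uncorrupted."""
    heading = prev_heading + gyro_rate * dt          # gyro integration
    mx, my, mz = mag_xyz
    magnitude = math.sqrt(mx * mx + my * my + mz * mz)
    if abs(magnitude - EARTH_FIELD_UT) <= TOLERANCE_UT:
        compass = math.atan2(my, mx)                 # compass heading
        # Wrap the error into (-pi, pi] before applying the correction.
        error = math.atan2(math.sin(compass - heading),
                           math.cos(compass - heading))
        heading += gain * error                      # slow drift correction
    return heading
```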
According to the invention, the human body posture in the real world and the state of the handheld peripheral prop can be introduced into the virtual reality in real time and mapped to the corresponding character, and the actions of the virtual environment on the character are fed back in real time, in an appropriate form, to the user's senses in the real world, so that the immersion of the virtual reality is greatly improved and the interactivity between the character and the virtual environment is increased, making the user experience more vivid and realistic.
The preferred embodiments of the present invention have been described in detail above, but the present invention is not limited to the specific details of the above embodiments, and various equivalent changes can be made to the technical solution of the present invention within the scope of the technical concept of the present invention, and all the equivalent changes belong to the protection scope of the present invention.