Disclosure of Invention
In view of the above problems, an object of the present invention is to provide an object grabbing interaction method under complex motion constraints in a virtual reality system. The invention mainly establishes, for the range of motion of a constrained object in the virtual environment, a plurality of limiting triggers and a state trigger, and sets a barrier that constrains the motion. If the object is inside the state trigger, the object is in a motion-constrained state. If the object is in the constrained state and, while moving within its range, touches a limiting trigger, and it is judged from the motion trend that the object would still be touching that limiting trigger in the next frame, then the virtual object cannot continue to move in that direction and its motion state remains unchanged; meanwhile, the virtual hand grasping the virtual object no longer follows the user's real-world hand, while the grasping state of the virtual hand on the virtual object is maintained, so that the simulated interaction conforms to the motion relationship between a hand and a grasped object in the real world. At this moment, if the user continues to move the operating handle in the direction of the motion trend, the vibration function of the operating handle reminds the user that the movement of the virtual hand is limited; the vibration amplitude and frequency increase with the distance, and when the distance reaches a certain value, the virtual hand is disconnected from the virtual object.
In order to achieve the above object, the invention adopts the following technical scheme:
An object grabbing interaction method under complex motion constraints in virtual reality comprises the following steps:
1) setting a relation between a virtual hand and a tracking positioning point of the interactive equipment, setting an object limiting constraint condition, and establishing a standard position hanging point of the object and the hand;
2) controlling a virtual hand to grab an object;
3) judging whether the object is in a constrained state or a free unconstrained state at present;
4) if the object is in the constrained state, judging that the virtual hand grabs the object when the distance between the virtual hand and the object is smaller than a set value, and then proceeding to step 5); if the object is in the free, unconstrained state, attaching the object directly to the virtual hand so that the virtual object moves completely with the virtual hand, completing the grabbing interaction;
5) after the virtual hand grabs the object, displaying a proper gesture for grabbing the object by adjusting the posture of the virtual hand;
6) calculating, in real time, the motion trend of the hand for the next frame according to the positions of the grabbed object in the previous frame and the current frame;
7) judging whether the next frame of object is in a motion constraint state or a free motion state according to the motion trend;
8) according to the judgment result, when the object in the next frame is in the motion-constrained state, stopping the movement of the virtual hand, and when the object in the next frame is in the free-motion state, allowing the virtual hand to continue moving;
9) releasing the object from the virtual hand.
Preferably, the object limiting constraint conditions are set in step 1) as follows: a plurality of limiting triggers are provided over the range and directions in which the object is to be limited, each limiting trigger having a movement direction through which passage is prohibited.
Preferably, the object limiting constraint conditions are set in step 1) as follows: a state trigger is provided for determining whether the object is in the constrained state.
Preferably, in step 1), the relation between the virtual hand and the tracking positioning point of the interactive equipment is set as follows: a virtual hand calculation management class HandAnchor is established in the virtual environment, and the position and posture of the HandAnchor are kept directly consistent with the hand movement of the real-world user; a child object HandOffset is created under the HandAnchor node, and the position and posture of the HandOffset represent the position and posture of the virtual hand; when a freely moving object is operated, the position of the HandAnchor is consistent with that of the HandOffset, that is, the virtual hand and the operated object completely follow the hand motion of the real-world user; when an object in the constrained state is operated, the position of the HandAnchor is inconsistent with that of the HandOffset, and within the constrained region the virtual hand and the operated object cannot follow the hand motion of the real-world user.
Preferably, in step 5), the virtual hand posture is adjusted as follows: firstly, the hanging-point position information of the operated object is acquired and converted into values in the world coordinate system, including position and posture; then, the position information of the HandOffset is acquired and converted into values in the world coordinate system; finally, intermediate values of the HandOffset are calculated in world coordinates by a linear interpolation algorithm, and the virtual hand is gradually moved to the hanging-point position through assignment operations.
Preferably, the motion trend of the hand for the next frame is calculated in step 6) as follows: the position of the virtual palm in the previous frame is retained, and the previous-frame position is subtracted from the current-frame position of the virtual palm to obtain a vector, namely the motion trend of the virtual palm for the current frame, including both direction and distance.
Preferably, in step 7), the object in the next frame is judged to be in the motion-constrained state when the following conditions hold: the object is currently in the constrained state; the motion-constraining obstacle is already in contact with a limiting trigger; and the included angle between the motion trend of the virtual hand and the prohibited movement direction of the limiting trigger contacted by the motion-constraining obstacle is smaller than 90 degrees. The motion trend vector of the virtual hand and the prohibited-direction vector of the limiting trigger contacted by the motion-constraining obstacle are dot-multiplied, and the angle is judged by the sign of the dot product: if the dot product is greater than 0, the included angle between the two vectors is smaller than 90 degrees.
After the grabbing action is finished, the virtual hand can actively release the object; and when the distance between the virtual hand and the tracking positioning point exceeds a limit value because the two have become inconsistent, the virtual hand passively releases the object.
Due to the adoption of the above technical scheme, the invention has the following technical advantages: the method correctly handles the simulation problem of a user grasping an object under complex motion constraints in the virtual environment while no corresponding motion constraint exists in reality; it ensures that the operated object in the virtual space satisfies the mutual-blocking constraint relations between entities during the interactive operation, avoids the grasped object passing through the constraint limits and the jumping of the object at the moment it is grasped, and enhances the realism of interactive operation in the virtual environment. The method is suitable for continuous operation simulation, using simple interactive equipment, of complex objects that can move with multiple degrees of freedom within a certain motion constraint range in a virtual environment.
Detailed Description
The invention is described in detail below with reference to the figures and examples.
In a virtual reality scenario, a user wears a virtual reality helmet and holds an operating handle or a data glove with position and posture tracking (hereinafter referred to as the operating handle). Guided by the virtual scene observed through the helmet, the user moves the operating handle to operate an object whose movement in the virtual environment is limited to a certain range. The technical flow of this interactive operation is shown in fig. 1.
Step 1: initial setting
First, the relation between the virtual hand and the tracking positioning point of the interactive equipment is set. A virtual hand calculation management class HandAnchor is established in the virtual environment, and the position and posture of the HandAnchor are driven directly by the tracking and positioning data of the operating handle. A child object HandOffset is created under the HandAnchor node, and the position and posture of the HandOffset represent the position and posture of the virtual hand. When a freely moving object is operated, the position of the HandAnchor is consistent with that of the HandOffset, that is, the virtual hand and the operated object completely follow the hand motion of the real-world user; when an object in the constrained state is operated, the virtual hand and the operated object are constrained, the position of the HandAnchor becomes inconsistent with that of the HandOffset, and within the constrained region the hand motion of the real-world user cannot drive the constrained virtual hand.
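The following is a minimal, engine-agnostic sketch of this anchor/offset hierarchy, showing positions only (postures would be handled analogously). Apart from HandAnchor and HandOffset, all names are illustrative assumptions and not part of the claimed method.

    class HandOffset:
        """Local offset of the virtual hand relative to HandAnchor.
        Zero while the object moves freely; adjusted while motion is constrained."""
        def __init__(self):
            self.local_position = (0.0, 0.0, 0.0)

    class HandAnchor:
        """Driven directly every frame by the tracked pose of the operating handle."""
        def __init__(self):
            self.position = (0.0, 0.0, 0.0)   # written from the tracker each frame
            self.hand_offset = HandOffset()   # child node representing the virtual hand

        def world_hand_position(self):
            # The virtual hand's world position is the anchor position plus the local offset.
            ax, ay, az = self.position
            ox, oy, oz = self.hand_offset.local_position
            return (ax + ox, ay + oy, az + oz)

While the object moves freely, local_position stays at zero and the virtual hand coincides with the tracked point; under constraint, only local_position is changed, so the tracked anchor and the visible hand can diverge.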
Second, the object limiting constraint conditions are set. The movement of the operated virtual object is limited by other objects in the virtual environment within a certain range, and the limited range and directions are defined by designing a plurality of limiting triggers. The position, size and number of the limiting triggers depend on the range into which movement is not allowed; the Z-axis orientation of each limiting trigger indicates the movement direction through which passage is prohibited. In addition, a state trigger is added for judging whether the object is in the constrained state.
For example, consider a square ring-shaped object whose motion is limited by a central post, as shown in fig. 2. Four limiting triggers (1) and one state trigger (2) are added. The limiting triggers (1) are located on the four edges respectively, the post is the obstacle (3), and the direction of each Z axis (4) is from inside to outside, indicating that when the post touches an edge it cannot move outward through that edge. The state trigger (2) is located inside the square without contacting any edge; if the post is inside the state trigger, it is inside the square, that is, the object is constrained.
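A possible data sketch of the triggers for this square-ring example is given below. The field names and the way contact flags are updated are assumptions, since the original relies on the trigger/collision events of the underlying engine; only the roles of the limiting triggers and the state trigger follow the text.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class LimitTrigger:
        forbidden_dir: Vec3                 # Z-axis of the trigger: direction in which passage is prohibited
        touched_by_obstacle: bool = False   # updated by the engine's trigger-enter/exit events

    @dataclass
    class StateTrigger:
        contains_obstacle: bool = False     # True while the post lies inside the square

    @dataclass
    class ConstrainedObject:
        limit_triggers: List[LimitTrigger] = field(default_factory=list)
        state_trigger: StateTrigger = field(default_factory=StateTrigger)

        def is_constrained(self) -> bool:
            # Step 3 below: the object is in the constrained state while the
            # obstacle is in contact with the state trigger.
            return self.state_trigger.contains_obstacle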
Then, a hanging point at the standard position of the object and the hand is established. To ensure that the virtual palm fits against the grasped object when the virtual hand grasps it, a standard-position hanging point between the object and the hand needs to be established as a reference. The hanging point is located on the object; when the position of the virtual palm coincides exactly with the position of the hanging point, the virtual palm fits the object perfectly.
The hanging point is established as follows: in the editing state, the virtual palm and the object are placed at the same hierarchy level; after they are adjusted to the matching position, an empty object is created as the hanging point, its position is made consistent with that of the virtual palm, and it is placed under the object in the hierarchy as a child of the object, so that the relative position between the hanging point and the object remains fixed.
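A brief sketch of the hanging-point idea, assuming the hang point is simply stored as a fixed local offset of the object (names are illustrative; only positions are shown, postures follow the same pattern):

    class GrabbableObject:
        def __init__(self, position, hang_point_local_offset):
            self.position = position                   # object position in world coordinates
            self.hang_local = hang_point_local_offset  # fixed offset recorded in the editor

        def hang_point_world(self):
            # World position of the hang point = object position + fixed local offset,
            # so the hang point always stays at the same place on the object.
            px, py, pz = self.position
            hx, hy, hz = self.hang_local
            return (px + hx, py + hy, pz + hz)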
Step 2: start to grab
The user holds the operating handle and moves, presses the button on the handle, and controls the virtual hand to open and grasp, so as to realize the grabbing action of the virtual object.
And step 3: object constraint state determination
If the obstacle restraining the movement is in contact with the state trigger, the object is in a restraining state, otherwise the object is in a free movement state. If the object is in the constrained state, the method is used for grabbing; if the object is in a free motion state, the virtual object is directly attached to the virtual hand, and the virtual object completely moves along with the virtual hand.
Step 4: Determination of the grabbed object
It is judged whether the virtual hand grasps the object. If the object is grasped, the virtual hand is adjusted to a proper position and posture in which the hand shape matches the object, and the virtual hand is attached to the virtual object to complete the grasping effect, thereby avoiding jumping of the virtual object when it is grasped in the constrained state.
The principle of judging whether the virtual hand grabs the object is shown in fig. 3. After the program receives the grab command in the constrained state, it calculates the distance between the virtual hand and the hanging point of the operated object. If this distance is smaller than a set value, the operated object is within grabbing range, and the position of the virtual palm is adjusted according to the method of the next step so that the virtual palm fits the object and the object is grabbed without jumping.
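A minimal sketch of this grab test, assuming a simple Euclidean distance threshold; the value of GRAB_DISTANCE is an illustrative assumption, not a value given in the original.

    import math

    GRAB_DISTANCE = 0.08  # metres, assumed threshold ("set value" in the text)

    def try_grab(palm_pos, hang_point_pos, threshold=GRAB_DISTANCE):
        # True when the operated object's hang point is within grabbing range.
        return math.dist(palm_pos, hang_point_pos) < threshold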
And 5: virtual hand position adjustment
Firstly, the hanging-point position information of the operated object is acquired and converted into values in the world coordinate system, including position and posture; then, the position information of the HandOffset is acquired and converted into values in the world coordinate system; finally, Lerp linear interpolation is applied to the HandOffset in world coordinates to calculate intermediate values, and the virtual hand is gradually and continuously moved to the hanging-point position through assignment operations, achieving continuous, jump-free grabbing of the object by the virtual hand. (The Lerp linear interpolation method supports smooth transitions of integers, floating-point numbers, vectors and quaternions; its parameters comprise a current value (From), a target value (To) and a transition time (Time), and it moves a value smoothly and continuously from the current value toward the target value over the transition time.)
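A sketch of this step-5 adjustment, assuming a per-frame interpolation factor in place of an explicit transition time (the original only specifies "gradually" via linear interpolation, so the factor t is an assumption):

    def lerp(a, b, t):
        # Linear interpolation between two vectors a and b by factor t in [0, 1].
        return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

    def adjust_hand_toward_hang_point(hand_world_pos, hang_point_world_pos, t=0.25):
        # Each frame, close a fraction t of the remaining gap so the palm
        # attaches to the object smoothly, without jumping.
        return lerp(hand_world_pos, hang_point_world_pos, t)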
Step 6: motion trend calculation
The motion trend of the hand is calculated by comparing the position of the virtual palm in the previous frame with its position in the current frame, and before the trend is converted into actual motion, its validity is judged according to the method of the next step. The motion trend is calculated as follows: the position of the virtual palm in the previous frame is retained, and the previous-frame position is subtracted from the current-frame position of the virtual palm to obtain a vector, namely the motion trend of the virtual palm for the current frame, including both direction and distance, as shown in fig. 4.
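A minimal sketch of this frame-to-frame trend calculation; the class name and the zero-vector handling of the very first frame are assumptions.

    class MotionTrend:
        """Keeps the palm position of the previous frame and returns the
        frame-to-frame difference as the motion trend vector."""
        def __init__(self):
            self.prev_pos = None

        def update(self, current_pos):
            if self.prev_pos is None:
                trend = (0.0, 0.0, 0.0)   # no trend available on the very first frame
            else:
                # Current-frame position minus previous-frame position:
                # direction and magnitude of this frame's movement.
                trend = tuple(c - p for c, p in zip(current_pos, self.prev_pos))
            self.prev_pos = current_pos
            return trend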
And 7: judgment of effectiveness of movement trend
To judge that the motion trend is constrained, three conditions need to be met: first, the object is currently in the constrained state; second, the motion-constraining obstacle is already in contact with a limiting trigger; and third, the included angle between the motion trend of the hand and the Z axis of the limiting trigger contacted by the motion-constraining obstacle is smaller than 90 degrees. Whether the included angle between the two vectors exceeds 90 degrees is judged by the sign of their dot product: if the dot product is greater than 0, the angle between the two vectors is between 0 and 90 degrees; if the dot product is less than 0, the angle is between 90 and 180 degrees. If the above three conditions are all satisfied, the object is constrained and cannot move.
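A sketch of this three-condition test, assuming the prohibited directions (Z axes) of the currently touched limiting triggers are passed in as plain vectors; the function and parameter names are illustrative.

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def trend_is_blocked(is_constrained, touched_forbidden_dirs, trend):
        # Condition 1: the object is currently in the constrained state.
        if not is_constrained:
            return False
        # Condition 2: the obstacle touches at least one limiting trigger
        # (touched_forbidden_dirs holds the Z-axes of those triggers).
        # Condition 3: the angle between the trend and a prohibited direction
        # is smaller than 90 degrees, i.e. the dot product is positive.
        return any(dot(trend, z) > 0 for z in touched_forbidden_dirs)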
The judgment principle is illustrated by the case shown in fig. 5. The black circle represents a rod-shaped obstacle, on which a state trigger is arranged for detecting whether the rod-shaped obstacle is in contact with the frame. The arrow on each side of each box represents the limited direction of that side, namely the Z-axis orientation of that side's limiting trigger. The displacement from the thick black box 1 to the dashed box 2 is the displacement controlled by the real interaction device, i.e., the expected displacement of the box in the virtual environment, while the thin black box 3 is the actual displacement in the virtual environment calculated after being blocked by the circular obstacle. In each frame one expected displacement of the box is calculated; because of the rod-shaped obstacle, the box cannot always move according to the expected displacement, that is, if the box collides with the rod-shaped obstacle and moves toward its limited direction, the movement of the box is limited.
and 8: virtual hand position and pose settings
According to the judgment result of the previous step, the virtual hand can be moved or the virtual hand can not be moved, and the position and the posture of the virtual hand are respectively set according to different conditions:
the virtual hand can move: if the hand can move, the value of HandOffset is not changed, the virtual hand is made to move along with the real world operation handle, and the virtual hand and the real hand are kept relatively still.
The virtual hand cannot move: the value of the HandOffset is dynamically adjusted according to the motion trend (see step 5, "Virtual hand position adjustment") so that the virtual hand does not follow the tracking positioning point of the operating handle. At the same time, the distance between the virtual hand and the tracking positioning point of the operating handle is calculated, and the user is reminded, for example by vibration, that the position and posture of the virtual hand are inconsistent with the tracking positioning point; the larger the distance, the higher the vibration frequency and intensity, and when the distance exceeds a preset value the virtual hand automatically releases the object.
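A sketch of the feedback for the blocked case, mapping the drift between the tracked handle and the constrained virtual hand to vibration strength and to the passive-release decision. The numeric limits and the shape of the mapping are illustrative assumptions; in practice the amplitude and frequency would be handed to the controller's haptics interface each frame.

    import math

    RELEASE_DISTANCE = 0.30    # metres, assumed preset limit before passive release
    MAX_AMPLITUDE = 1.0
    MAX_FREQUENCY_HZ = 320.0

    def haptic_feedback(hand_pos, tracker_pos):
        d = math.dist(hand_pos, tracker_pos)
        k = min(d / RELEASE_DISTANCE, 1.0)     # grows from 0 to 1 as the gap widens
        amplitude = k * MAX_AMPLITUDE          # stronger vibration with larger distance
        frequency = k * MAX_FREQUENCY_HZ       # faster vibration with larger distance
        release = d > RELEASE_DISTANCE         # passive release once the limit is exceeded
        return amplitude, frequency, release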
And step 9: releasing an object
There are two triggering situations for releasing the object: firstly, the user actively puts down; secondly, the distance between the virtual hand and the tracking positioning point of the operating handle is larger and exceeds a preset value.
When the object is released, the Lerp linear interpolation algorithm is used to gradually change the HandOffset back to its initial value, so that the virtual hand smoothly returns to the tracking positioning point of the operating handle and their positions and postures become consistent.
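A short sketch of this return-on-release behaviour, assuming the same kind of per-frame interpolation factor as in the step-5 sketch:

    def lerp(a, b, t):
        return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

    def release_update(hand_offset_local, t=0.25):
        # Move the local offset back toward zero each frame so the virtual hand
        # smoothly returns to the tracked point of the operating handle.
        return lerp(hand_offset_local, (0.0, 0.0, 0.0), t)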