CN118394222B - Virtual-real combined force processing system, method, force collecting clothing and force feedback clothing - Google Patents
- Publication number
- CN118394222B (application CN202410821472.7A)
- Authority
- CN
- China
- Legal status
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/016—Input arrangements with force or tactile feedback as computer generated output to the user
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
Abstract
The invention provides a virtual-real combined force processing system and method, a force collection garment, and a force feedback garment. The force collection garment genuinely captures an actor's force information, a processing end edits and processes that information, and the processed force data are synthesized with the video. By matching the haptic devices on the force feedback garment to the force collection garment, the processed force data are fed back to the user, reproducing the force feedback experienced in the video so that force is felt accurately at each part of the user's body. This simulates the force interactions of a character in the video more faithfully and deepens the user's immersive experience. Further, a new, extensible video-watching system is created, prolonging the useful life of video content while increasing user stickiness.
Description
Technical Field
The invention relates to the technical field of virtual reality, and in particular to a virtual-real combined force processing system and method, a force collection garment, and a force feedback garment.
Background
With the continuous development of television and film media, more and more excellent cinematic works are reaching audiences. In the traditional viewing process, the user's experience is primarily visual and auditory. To improve the viewing experience and move beyond the existing single viewing mode, a common approach is to give users different interactive experiences through seat vibration, shaking, or other means. However, these approaches are coarse rather than truly fine-grained, and struggle to give users an immersive experience.
Therefore, how to simulate a virtual environment realistically, strengthen the user's sense of immersion, and make the experience more refined is a problem that urgently needs to be solved.
Disclosure of Invention
Therefore, embodiments of the present invention provide a virtual-real combined force processing system and method, a force collection garment, and a force feedback garment, so as to solve the problems of a single viewing mode and a coarse user experience.
In order to achieve the above object, the embodiment of the present invention provides the following technical solutions:
The first aspect of the invention discloses a virtual-real combined force processing system, which comprises: force collecting garments, processing ends and force feedback garments.
The processing end is respectively in communication connection with the force collection garment and the force feedback garment.
The force collection garment comprises a plurality of pre-calibrated vertices, each vertex correspondingly provided with a force collection element. The force feedback garment is provided with at least a plurality of force feedback elements corresponding to pre-calibrated target vertices, and the force feedback elements are bound one-to-one with the force collection elements.

The force collection garment is used for initializing the coordinate information of each vertex and the stress information of each vertex under the actor's initial posture to obtain initial coordinate information and initial stress information; and for collecting, with the force collection elements, the track coordinate information and real-time stress information generated by the movement of each vertex while the actor moves, and sending the track coordinate information, the real-time stress information, and the actor's identification ID to the processing end.

The processing end is used for receiving the track coordinate information, the real-time stress information, and the identification ID; acquiring video data and generating force feedback data corresponding to the identification ID by combining a preset force feedback template, the track coordinate information, and the real-time stress information; and, when a playing instruction containing a target identification ID is received, searching for the target force feedback data corresponding to the target identification ID, converting the target force feedback data into a haptic signal, and sending the haptic signal to the force feedback garment.

The force feedback garment is used for matching the initial coordinate information and initial stress information of each target vertex under the user's initial posture against the initial coordinate information and initial stress information in the force collection garment; and for receiving the haptic signal, generating an instruction according to the haptic signal, and sending the instruction to a target force feedback element so that the target force feedback element outputs the force corresponding to the instruction, where the target force feedback element is any one of the force feedback elements.
Preferably, in initializing the coordinate information of each vertex and the stress information of each vertex under the actor's initial posture to obtain the initial coordinate information and initial stress information, the force collection garment is specifically configured to:

acquire the coordinates of each preset joint on the actor when the actor is in the initial posture; for each vertex, acquire the weight of each preset bone with respect to the current vertex; and calculate the transformation matrix corresponding to each preset bone according to the target transformation matrix corresponding to each preset joint;

calculate the coordinate information of the current vertex based on the weights, the transformation matrices, and the preset joint coordinates; mark the coordinate information of each vertex when the actor is in the initial posture as the initial coordinate information; and mark the force received by the force collection element at each vertex when the actor is in the initial posture as the initial stress information.
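The weighted combination of bone transforms described above follows the same pattern as linear blend skinning, where a vertex position is the sum of each bone's transform applied to the vertex, scaled by that bone's weight. A minimal NumPy sketch, with illustrative (not patent-specified) names and data:

```python
import numpy as np

def skin_vertex(rest_pos, bone_weights, bone_transforms):
    """Linear blend skinning: blend each bone's transform of the
    rest-pose vertex position by that bone's influence weight."""
    v = np.append(rest_pos, 1.0)            # homogeneous coordinates
    blended = np.zeros(4)
    for w, m in zip(bone_weights, bone_transforms):
        blended += w * (m @ v)              # weight * (transform @ vertex)
    return blended[:3]

# Example: a vertex influenced by two bones; the second bone's
# transform translates by +2 along x.
rest = np.array([0.0, 1.0, 0.0])
weights = np.array([0.7, 0.3])              # must sum to 1
t = np.eye(4)
t[0, 3] = 2.0
pos = skin_vertex(rest, weights, np.stack([np.eye(4), t]))
# x blends to 0.7*0 + 0.3*2 = 0.6; y and z are unchanged
```

Coordinates computed this way in the initial posture would then be stored as the initial coordinate information.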
Preferably, in collecting, with the force collection elements, the track coordinate information generated by the movement of each vertex while the actor moves, the force collection garment is specifically configured to:

collect, with the force collection elements, a plurality of continuous motion coordinates of each vertex while the actor moves, together with a timestamp corresponding to each motion coordinate; and calculate the track coordinate information of each vertex based on all the continuous motion coordinates and their corresponding timestamps.
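One plausible way to turn those timestamped samples into track information is to pair consecutive samples and derive per-segment displacement and speed; the data shapes below are assumed for illustration, not specified by the patent:

```python
def trajectory(samples):
    """Build per-segment track data from timestamped vertex samples.

    samples -- list of (timestamp_seconds, (x, y, z)) in capture order
    Returns one record per consecutive pair with displacement and speed.
    """
    segs = []
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        disp = tuple(b - a for a, b in zip(p0, p1))     # movement vector
        dt = t1 - t0
        dist = sum(d * d for d in disp) ** 0.5          # Euclidean length
        segs.append({"t": (t0, t1), "displacement": disp,
                     "speed": dist / dt if dt > 0 else 0.0})
    return segs

# Three samples of one vertex at 20 Hz.
samples = [(0.00, (0.0, 1.0, 0.0)),
           (0.05, (0.1, 1.0, 0.0)),
           (0.10, (0.3, 1.1, 0.0))]
track = trajectory(samples)
```

Each vertex would carry its own such list, sent to the processing end alongside the real-time stress information.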
Preferably, in acquiring video data and generating the force feedback data corresponding to the identification ID by combining the preset force feedback template, the track coordinate information, and the real-time stress information, the processing end is specifically configured to:

acquire video data, and load the video data, the real-time stress information, and the preset forces in the preset force feedback template onto a front-end interactive interface;

in response to an enhancement operation by an administrator on first stress information within the real-time stress information at the front-end interactive interface, increase the force degree of the first stress information to obtain first video data; and/or, in response to a weakening operation by the administrator on second stress information within the real-time stress information at the front-end interactive interface, reduce the force degree of the second stress information to obtain second video data; and/or, in response to an adding operation by the administrator on a target video picture in the video data at the front-end interactive interface, add a preset force at the timestamp corresponding to the target video picture to obtain third video data;

synthesize the first video data and/or the second video data and/or the third video data to obtain the force feedback data, and store the force feedback data in association with the identification ID.
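The three editing operations (enhancing, weakening, and adding a preset force at a video timestamp) can be sketched as transformations over a list of force records; the record format here is hypothetical:

```python
def enhance(records, vertex_id, factor):
    """Scale up the force degree of every record for one vertex."""
    return [dict(r, force=r["force"] * factor) if r["vertex"] == vertex_id else r
            for r in records]

def weaken(records, vertex_id, factor):
    """Scale down the force degree (factor between 0 and 1)."""
    return enhance(records, vertex_id, factor)

def add_preset(records, timestamp, vertex_id, preset_force):
    """Insert a preset force at the timestamp of a target video frame."""
    return sorted(records + [{"t": timestamp, "vertex": vertex_id,
                              "force": preset_force}],
                  key=lambda r: r["t"])

records = [{"t": 1.0, "vertex": 2, "force": 5.0},
           {"t": 2.0, "vertex": 3, "force": 4.0}]
records = enhance(records, 2, 1.5)           # boost the first stress info
records = add_preset(records, 1.5, 7, 3.0)   # add a preset force mid-scene
```

The edited record list would then be synthesized with the video and stored against the actor's identification ID.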
Preferably, in generating an instruction according to the haptic signal and sending the instruction to a target force feedback element, the force feedback garment is specifically configured to:

obtain the target force feedback elements corresponding to the haptic signal from among all the force feedback elements; generate a corresponding instruction for each target force feedback element according to the haptic signal; and send each instruction to its corresponding target force feedback element.
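A sketch of this dispatch step, assuming the haptic signal carries per-vertex force entries and the one-to-one binding is stored as a lookup table (all names are illustrative):

```python
def dispatch(haptic_signal, elements):
    """Route each entry of a haptic signal to its bound force feedback
    element and build one output instruction per element.

    haptic_signal -- dict: vertex id -> force magnitude
    elements      -- dict: vertex id -> element name (the 1:1 binding)
    """
    instructions = []
    for vertex_id, force in haptic_signal.items():
        element = elements.get(vertex_id)
        if element is None:
            continue                    # no element bound at this vertex
        instructions.append({"element": element, "output_force": force})
    return instructions

signal = {1: 2.5, 2: 0.8}
binding = {1: "shoulder_actuator", 2: "upper_arm_actuator"}
cmds = dispatch(signal, binding)
```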
Preferably, the number of vertices on the force collecting garment is the same as the number of target vertices on the force feedback garment.
The second aspect of the present invention discloses a virtual-real combined force processing method, which is applied to the force processing system disclosed in the first aspect of the present invention, and the method comprises:
Collecting, with the force collection garment, the coordinate information of each preset vertex and the stress information of each preset vertex under the actor's initial posture, and initializing them to obtain initial coordinate information and initial stress information.

Collecting, with a force collection element in the force collection garment, the track coordinate information and real-time stress information generated by the movement of each preset vertex while the actor moves.

Transmitting the track coordinate information, the real-time stress information, and the actor's identification ID to a processing end.

Acquiring video data with the processing end, and generating force feedback data corresponding to the identification ID by combining a preset force feedback template, the track coordinate information, and the real-time stress information.

When the processing end receives a playing instruction containing a target identification ID, searching for the target force feedback data corresponding to the target identification ID, converting the target force feedback data into a haptic signal, and sending the haptic signal to a force feedback garment, where the target identification ID is any one of the identification IDs.

Matching, with the force feedback garment, the initial coordinate information of each preset target vertex and the initial stress information of each preset target vertex under the user's initial posture against the initial coordinate information and initial stress information in the force collection garment.

Receiving the haptic signal with the force feedback garment, generating an instruction according to the haptic signal, and sending the instruction to a target force feedback element in the force feedback garment so that the target force feedback element outputs the force corresponding to the instruction.
The third aspect of the invention discloses a force collection garment worn by an actor. The force collection garment comprises at least a plurality of pre-calibrated vertices, each vertex correspondingly provided with a force collection element; a vertex coordinate collector; and a data sending module and a data processing module connected with the vertex coordinate collector.

The force collection elements are used for collecting the track coordinate information and real-time stress information generated by the movement of each vertex while the actor moves.

The vertex coordinate collector is used for aggregating the track coordinate information and real-time stress information gathered by the force collection elements arranged at the preset vertices.

The data processing module is used for initializing the coordinate information of each vertex and the stress information of each vertex under the actor's initial posture to obtain initial coordinate information and initial stress information.

The data sending module is used for positioning the force collection elements; converting the collected real-time stress information to obtain converted real-time stress information; transmitting the initial coordinate information and initial stress information to a force feedback garment connected with the force collection garment; and sending the actor's identification ID, the track coordinate information, and the converted real-time stress information to a processing end in communication connection with the force collection garment, so that the processing end generates a haptic signal from them and sends it to the force feedback garment, which in turn outputs force based on the haptic signal.
The fourth aspect of the present invention discloses a force feedback garment worn by a user and connected with a processing end and a force collection garment respectively. The force feedback garment comprises at least a plurality of pre-calibrated target vertices, each target vertex correspondingly provided with a target force feedback element; a haptic signal receiver; and a data receiving module, a matching module, and an adjusting module connected with the haptic signal receiver.

The target force feedback elements are used for collecting the initial coordinate information of each target vertex and the initial stress information of each target vertex under the user's initial posture.

The matching module is used for matching the initial coordinate information and initial stress information of each target vertex against the initial coordinate information and initial stress information sent by the force collection garment, where the latter are obtained by the force collection garment collecting and initializing the coordinate information and stress information of each vertex under the actor's initial posture.

The haptic signal receiver is used for receiving the haptic signal sent by the processing end, where the haptic signal is generated by the processing end from the actor's identification ID, the track coordinate information, and the converted real-time stress information sent by the force collection garment.

The data receiving module is used for positioning the target force feedback elements, generating an instruction according to the haptic signal, and sending the instruction to the target force feedback element in the force feedback garment so that the target force feedback element outputs the force corresponding to the instruction.

The adjusting module is used for receiving an adjustment instruction input by the user and adjusting the degree of force output by the target force feedback element according to the adjustment instruction.
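The adjusting module's role amounts to scaling the commanded output force by a user-chosen level; a hypothetical sketch that also clamps the result to a safe maximum (the clamp is an added assumption, not stated in the patent):

```python
def adjust_output(commanded_force, user_level, max_force=10.0):
    """Scale the commanded force by the user's adjustment level
    (0.0 = off, 1.0 = as authored) and clamp to a safe range."""
    scaled = commanded_force * user_level
    return max(0.0, min(scaled, max_force))

print(adjust_output(8.0, 0.5))   # halved by the user's setting
print(adjust_output(8.0, 2.0))   # clamped to max_force
```

Applying the scale just before actuation lets each viewer tune intensity without altering the stored force feedback data.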
Based on the virtual-real combined force processing system and method, force collection garment, and force feedback garment provided by the embodiments of the invention, an actor's force information is genuinely captured through the force collection garment, edited and processed by the processing end, and the processed force data are synthesized with the video. By matching the haptic devices on the force feedback garment to the force collection garment, the processed force data are fed back to the user, reproducing the force feedback experienced in the video so that force is felt accurately at each part of the user's body. This simulates the force interactions of a character in the video more faithfully and deepens the user's immersive experience. Further, a new, extensible video-watching system is created, prolonging the useful life of video content while increasing user stickiness.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the technical solutions in the prior art, the drawings required for describing them are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present invention; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a virtual-real force processing system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of force collection and force feedback provided by an embodiment of the present invention; wherein, (a) is a force collection schematic diagram, and (b) is a force feedback schematic diagram;
FIG. 3 is a schematic illustration of the calibration of the bones, joints, and vertices of a human body according to an embodiment of the present invention; wherein (a) is a calibration schematic diagram of human bones, (b) is a calibration schematic diagram of human joints, and (c) is a calibration schematic diagram of human vertices;
FIG. 4 is a schematic view of a joint level tree provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of vertex stress according to an embodiment of the present invention; wherein, (a) is a single-vertex 2-dimensional graph and (b) is a multi-vertex 3-dimensional graph;
FIG. 6 is a schematic diagram of a front-end interface for processing video and force data by a processing end according to an embodiment of the present invention;
FIG. 7 is a flow chart of a force collecting garment and process side data transmission provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of force transmission and attenuation provided by an embodiment of the present invention;
FIG. 9 is a flowchart of a virtual-real combined force processing method according to an embodiment of the present invention;
FIG. 10 is a schematic view of a force collecting garment according to an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of a force feedback garment according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the present disclosure, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In the conventional viewing process, audiences experience a work through audio and video, so the user's experience is primarily visual and auditory. Common interaction modes are likewise limited to giving the user different interactive experiences at corresponding points in the film, through seat shaking, jolting, or other means (e.g., water spray, wind). These interaction modes are coarse rather than truly fine-grained, and struggle to give users an immersive experience.
Therefore, embodiments of the present invention provide a virtual-real combined force processing system and method, a force collection garment, and a force feedback garment, in which an actor's stress information is genuinely captured through the force collection garment, edited and processed by a processing end, and the processed force data are synthesized with the video. By matching the haptic devices on the force feedback garment to the force collection garment, the processed force data are fed back to the user, reproducing the force feedback experienced in the video so that force is felt accurately at each part of the user's body. This simulates the force interactions of a character in the video more faithfully and deepens the user's immersive experience. Further, a new, extensible video-watching system is created, prolonging the useful life of video content while increasing user stickiness.
It should be noted that embodiments of the present invention can also be applied in fields such as film, animation, games, and education, where the technical system and accessories of the present invention can bring significant improvements.
Referring to fig. 1, a schematic diagram of a virtual-real combined force processing system according to an embodiment of the present invention is shown, where the system includes: a force collecting garment 1, a treatment end 2 and a force feedback garment 3.
Specifically, the processing end 2 is communicatively connected with the force collection garment 1 and the force feedback garment 3 respectively.
The force collection garment 1 comprises a plurality of pre-calibrated vertices, each provided with a force collection element. The force feedback garment 3 is provided with at least a plurality of force feedback elements corresponding to pre-calibrated target vertices, and the force feedback elements are bound one-to-one with the force collection elements.
It will be appreciated that the number of vertices on the force collecting garment 1 is the same as the number of target vertices on the force feedback garment 3.
The more densely and precisely the force collection elements of the force collection garment 1 cover the actor, the more accurate the collected force information. Likewise, the more force feedback elements of the force feedback garment 3 are matched on the user's body, the finer and more accurate the force feedback that can be generated.
It will be appreciated that, as shown in the force collection schematic of FIG. 2 (a), the actor wears the force collection garment 1, and the actor's actions and stresses are captured by it. As shown in the force feedback schematic of FIG. 2 (b), the user (the viewing user) wears the force feedback garment 3, which outputs the actor's processed actions and stresses onto the user, bringing an immersive viewing experience.
Specifically, the force collection garment 1 is used for initializing the coordinate information of each vertex and the stress information of each vertex under the actor's initial posture to obtain initial coordinate information and initial stress information, and for collecting, with the force collection elements, the track coordinate information and real-time stress information generated by the movement of each vertex while the actor moves and sending them, together with the actor's identification ID, to the processing end 2.

It will be appreciated that the functions of a force collection element include, but are not limited to: measuring the pressure from collisions between force collection elements; measuring the pressure from a collision between a force collection element and an external force; measuring the pressure exerted on it by gravity; and outputting the force data to the processing end 2.
It should be noted that, the specific process of initializing the coordinate information of each vertex and the stress information of each vertex under the initial posture of the actor to obtain the initial coordinate information and the initial stress information by the force collecting garment 1 is as follows:
process A1: and acquiring the coordinates of each preset joint on the actor when the actor is in the initial posture.
As shown in fig. 3, the bones (Skeleton), joints (Joints) and vertices (Vertex) of the human body are calibrated in advance, and in particular, fig. 3 (a) is a schematic diagram of calibration of the bones of the human body; for example, a1 in fig. 3 (a) is a human clavicle, b2 is an upper arm bone, and c3 is a lower arm bone. FIG. 3 (b) is a schematic illustration of the calibration of a human joint; for example, in fig. 3 (b), a is a shoulder joint, b is an elbow joint, and c is a wrist joint. FIG. 3 (c) is a schematic illustration of the calibration of the human vertex; for example, in fig. 3 (c), 1 is the first vertex of the shoulder, 2 is the second vertex of the upper arm, and 3 is the third vertex of the lower arm.
It should be noted that the stress points of the human body are the vertices: when the human body moves in response to an applied force, what actually changes is the position of the vertices. The vertices, in turn, are affected by the bones and joints.
Further, when the human body moves, the joints change position under the applied force (e.g., translating, rotating, etc.), and the bones also influence the vertices when the body is subjected to forces.
Therefore, the coordinate information of a vertex can be computed from the joint coordinates, the influence weight of each bone on the vertex, and the transformation matrix corresponding to each bone.
It will be appreciated that the coordinates of each preset joint on the actor are obtained when the actor is in an initial pose (e.g., standing pose), which is the coordinates of the joint coordinate system. The joint coordinate system is a coordinate system established by taking a certain joint as an origin.
Process A2: and aiming at each vertex, acquiring the weight of each preset skeleton to the current vertex.
It should be noted that one vertex may be affected by several bones, and the closer a bone is to the vertex, the larger that bone's share of the influence, referred to as its weight. The weights of all bones influencing one vertex sum to 1. For example, as shown in fig. 3, Vertex_2 is affected by Skeleton_a1 and Skeleton_b2; Skeleton_b2 has the greater influence on Vertex_2, i.e., the greater weight on Vertex_2, while Vertex_2 is essentially unaffected by Skeleton_c3.
Thus, for each vertex, the weight of each preset bone for the current vertex is obtained, labeled W.
Process A3: and calculating a transformation matrix corresponding to each preset skeleton according to the corresponding target transformation matrix of each preset joint.
As shown in fig. 4, a joint hierarchy tree is constructed from the joints of the human body, with the pelvis as the root node. Each node stores the transformation matrix relative to its parent node, denoted the target transformation matrix.
It should be noted that, based on the transformation matrix corresponding to the preset skeleton (Skeleton), the coordinates of the preset joint may be converted from the joint coordinate system to the world coordinate system. Wherein, each preset skeleton corresponds to a transformation matrix; the transformation matrix corresponding to the preset skeleton is the product of translation, rotation and scaling matrices from the joint coordinate system to the world coordinate system.
It will be appreciated that the transformation matrices are obtained by accumulating the target transformation matrices: the transformation matrix of each node is the product of the target transformation matrices along the path from the root node to that node in the joint hierarchy tree.
It should be noted that solving the transformation matrices from the target transformation matrices is, in essence, forward kinematics.
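The accumulation in process A3 can be sketched as follows. This is an illustrative implementation, not the patent's: the joint names, the parent/child layout, and the local translations are made-up example values, and rotations and scaling are omitted for brevity.

```python
# Hypothetical sketch of process A3: each joint stores a local ("target")
# transformation matrix relative to its parent; the world-space matrix of a
# bone is the product of the target matrices on the path from the root
# (pelvis) down to that joint, i.e. forward kinematics.

def mat_mul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """Build a 4x4 homogeneous translation matrix."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# Joint hierarchy tree: child -> parent (pelvis is the root node).
parents = {"pelvis": None, "shoulder": "pelvis",
           "elbow": "shoulder", "wrist": "elbow"}
# Local (target) transforms relative to the parent joint (example values).
local = {"pelvis": translation(0, 1.0, 0),
         "shoulder": translation(0.2, 0.6, 0),
         "elbow": translation(0, -0.3, 0),
         "wrist": translation(0, -0.25, 0)}

def world_matrix(joint):
    """Accumulate target transforms from the root down to `joint`."""
    p = parents[joint]
    return local[joint] if p is None else mat_mul(world_matrix(p), local[joint])

m = world_matrix("wrist")
print([round(m[i][3], 2) for i in range(3)])  # world-space wrist position
```

The world position of the wrist here is simply the sum of the translations along the pelvis-shoulder-elbow-wrist path; with rotations present, the order of the matrix products would matter in the same way.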
Process A4: and calculating to obtain the coordinate information of the current vertex based on the weight, the transformation matrix and the coordinates of the preset joint.
It should be noted that, based on the weight of each preset skeleton to the current vertex, the transformation matrix corresponding to each preset skeleton, and the coordinates of the preset joints, the coordinate information of the current vertex in the world coordinate system is calculated.
For example: for vertex A, let the weight with which bone i affects vertex A be Wi, the transformation matrix of bone i be Mi, and the joint-space coordinate be V; the coordinate of vertex A in the world coordinate system is then the sum over i of Wi·Mi·V, where the weights of all bones affecting vertex A sum to 1. That is, the transformation is blended according to each bone's weight; in effect, the bone transformation matrices are interpolated.
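Process A4 is the classic linear-blend-skinning computation, v' = Σi Wi·Mi·v with Σi Wi = 1. The sketch below assumes made-up weights and translation-only matrices for the Vertex_2 / Skeleton_a1 / Skeleton_b2 example; it is an illustration of the weighted blend, not the patent's implementation.

```python
# Illustrative sketch of process A4 (linear blend skinning): the world-space
# position of a vertex is the weighted sum of each bone's transform applied
# to the joint-space coordinate: v' = sum_i(W_i * M_i * v), sum(W_i) = 1.

def apply(m, v):
    """Apply a 4x4 matrix to a 3D point (homogeneous w = 1)."""
    x, y, z = v
    return tuple(m[i][0]*x + m[i][1]*y + m[i][2]*z + m[i][3] for i in range(3))

def skin_vertex(v, influences):
    """influences: list of (weight, 4x4 matrix); weights must sum to 1."""
    assert abs(sum(w for w, _ in influences) - 1.0) < 1e-9
    out = [0.0, 0.0, 0.0]
    for w, m in influences:
        p = apply(m, v)
        for i in range(3):
            out[i] += w * p[i]   # blend each bone's result by its weight
    return tuple(out)

# Vertex_2 influenced mostly by Skeleton_b2, slightly by Skeleton_a1.
m_a1 = [[1, 0, 0, 0.1], [0, 1, 0, 0.0], [0, 0, 1, 0], [0, 0, 0, 1]]
m_b2 = [[1, 0, 0, 0.5], [0, 1, 0, -0.2], [0, 0, 1, 0], [0, 0, 0, 1]]
v = skin_vertex((0.0, 1.0, 0.0), [(0.25, m_a1), (0.75, m_b2)])
print(v)  # blended world-space coordinate of Vertex_2
```

Blending the transformed points by weight is equivalent to interpolating the bone matrices themselves, which is exactly the remark at the end of the paragraph above.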
Process A5: the coordinate information of each vertex when the actor is in the initial pose is marked as initial coordinate information.
Process A6: the force received by the force feedback elements at each vertex when the actor is in the initial pose is marked as initial stress information.
Specifically, the specific process of the force collecting garment 1 using the force collecting element to collect the trajectory coordinate information generated by the movement of each vertex when the actor moves is as follows:
Process B1: a plurality of continuous motion coordinate information of each vertex and a time stamp corresponding to each motion coordinate information are collected by a force collecting element when the actor moves.
It should be noted that, when the actor moves, the force collecting element at each vertex collects a plurality of continuous motion coordinate information of each vertex in the movement process of the actor, and a time stamp corresponding to each motion coordinate information, that is, a time stamp when the vertex is at a certain coordinate position is recorded.
It will be appreciated that the force collecting element collects real-time stress information at the same time as it collects the continuous motion coordinate information and corresponding time stamps of each vertex. As shown in fig. 5, fig. 5 (a) is a 2-dimensional graph for a single vertex and fig. 5 (b) is a 3-dimensional graph for multiple vertices; together they represent the force intensity information recorded or generated by each force collecting element at different times.
Process B2: track coordinate information of each vertex is calculated based on all continuous motion coordinate information and the corresponding time stamp of each motion coordinate information.
The continuous motion coordinate information of each vertex is fitted (for example, by using an interpolation algorithm or a mathematical model) to obtain a smooth motion track. And then optimizing the smooth motion track by combining the time stamp corresponding to each motion coordinate information to obtain track coordinate information of each vertex.
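Processes B1 and B2 can be sketched as follows. The patent only requires "an interpolation algorithm or a mathematical model"; piecewise-linear interpolation is used here as the simplest stand-in, and the sample values are invented for illustration.

```python
# Hypothetical sketch of processes B1/B2: given timestamped coordinate
# samples for one vertex, build a trajectory that can be queried at any
# time by interpolating between the two nearest samples.

from bisect import bisect_right

def trajectory(samples):
    """samples: list of (timestamp, (x, y, z)), sorted by timestamp."""
    times = [t for t, _ in samples]
    def at(t):
        if t <= times[0]:
            return samples[0][1]          # clamp before the first sample
        if t >= times[-1]:
            return samples[-1][1]         # clamp after the last sample
        i = bisect_right(times, t)
        t0, p0 = samples[i - 1]
        t1, p1 = samples[i]
        a = (t - t0) / (t1 - t0)          # fraction of the way to sample i
        return tuple(c0 + a * (c1 - c0) for c0, c1 in zip(p0, p1))
    return at

# Three timestamped samples of a wrist vertex (example values).
wrist = trajectory([(0.0, (0, 0, 0)), (1.0, (0, 1, 0)), (2.0, (0, 2, 1))])
print(wrist(0.5))   # halfway between the first two samples
```

A smoother fit (e.g. cubic splines) would slot into the same interface; only the `at` function changes.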
It will be appreciated that the force-collecting garment 1 transmits the trajectory coordinate information of each vertex, the real-time stress information of each vertex, and the identification ID of the actor to the processing terminal 2, so that the processing terminal 2 processes the information collected by the force-collecting garment. The identification ID of the actor is used for indicating that the current track coordinate information of each vertex and the current real-time stress information of each vertex are motion track information and force data in a role represented by the current actor respectively.
That is, when a video includes multiple characters, an actor wearing force collection clothing 1 performs each character, so that the motion trajectory information and force data corresponding to each character can be collected.
It should be noted that collecting the motion trajectory information and force data corresponding to multiple characters lets the user experience different character viewpoints, which helps improve the user's experience, increases user stickiness, and extends the useful life of the video.
Specifically, the processing end 2 is configured to receive track coordinate information, real-time stress information and an identifier ID; acquiring video data and combining a preset force feedback template, track coordinate information and real-time stress information to generate force feedback data corresponding to an identification ID; when a play command containing the target identification ID is received, target force feedback data corresponding to the target identification ID is searched, the target force feedback data is converted into a touch signal, and the touch signal is sent to the force feedback garment 3.
It can be understood that the specific process of the processing end 2 obtaining the video data and combining the preset force feedback template, the track coordinate information and the real-time stress information to generate the force feedback data corresponding to the identification ID is as follows:
The processing end 2 obtains the video data, and loads the video data, the real-time stress information, and the preset forces in the preset force feedback template into the front-end interactive interface. For example, as shown in fig. 6, the video frames are displayed on the front-end interactive interface according to the video data; the real-time stress information is displayed as a force track (the received force information in fig. 6); and the preset forces of the preset force feedback template are also displayed as a force track (the user-defined force information in fig. 6).
It can be understood that the manager can edit the force tracks on the front-end interface to obtain force feedback data, and then store the current force feedback data in association with the identification ID, so that it designates the force feedback data of the character corresponding to that identification ID.
Specifically, the processing end 2 responds to the manager's enhancement operation on first stress information in the real-time stress information at the front-end interactive interface by increasing the force magnitude of the first stress information, obtaining first video data. That is, the manager selects the first stress information and applies the enhancement operation to it, and the processing end 2 increases its force magnitude in response; for example, the processing end 2 uses the editor to amplify it, or combines it with a preset force in the preset force feedback template, to achieve an effect exceeding the real force feedback information.
It will be appreciated that when the manager determines that the first stress information has benign feedback on the movie and the user experience, such as shooting recoil or explosion atmosphere, the manager may perform operations such as retaining or enhancing the first stress information.
Specifically, the processing end 2 may further reduce the force magnitude of second stress information in response to the manager's weakening operation on the second stress information in the real-time stress information at the front-end interactive interface, obtaining second video data. That is, the manager selects the second stress information and applies the weakening operation to it, and the processing end 2 reduces its force magnitude in response; for example, the processing end 2 uses the editor to cancel the corresponding force feedback information, or adjusts it to suit the viewing experience.
It will be appreciated that when the manager determines that the second stress information gives false or negative feedback for the film and the user experience, for example in shots filmed with the actor suspended or inverted, the second stress information may be weakened or cancelled.
Specifically, the processing end 2 may further add a preset force at a timestamp corresponding to a target video picture in the video data in response to an adding operation performed by a manager on the target video picture in the front-end interactive interface, so as to obtain third video data; that is, the administrator selects the target video frame and performs the adding operation on the target video frame, and then the processing end 2 responds to the adding operation of the administrator to add the preset force at the timestamp corresponding to the target video frame, so as to obtain the force feedback information for improving the experience of the user.
Specifically, the processing end 2 synthesizes the first video data and/or the second video data and/or the third video data to obtain the force feedback data, and stores the force feedback data in association with the identification ID.
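The manager's enhance, weaken, and add operations can be modelled as edits on a per-frame force track, as in this sketch. The track representation (timestamp, force) and the gain values are illustrative assumptions, not the patent's data format.

```python
# Sketch of the editing flow at the processing end: each operation rewrites
# a force track, and the edited tracks are what get stored as force
# feedback data in association with the identification ID.

def enhance(track, t0, t1, gain):
    """Scale the force of samples whose timestamp lies in [t0, t1].
    gain > 1 strengthens, gain < 1 weakens, gain == 0 cancels."""
    return [(t, f * gain if t0 <= t <= t1 else f) for t, f in track]

def add_preset(track, t, preset_force):
    """Insert a preset force at timestamp t (e.g. at a target video frame)."""
    return sorted(track + [(t, preset_force)])

track = [(0.0, 1.0), (1.0, 2.0), (2.0, 0.5)]
track = enhance(track, 0.9, 1.1, 2.0)   # strengthen recoil around t = 1 s
track = enhance(track, 1.9, 2.1, 0.0)   # cancel an unwanted force at t = 2 s
track = add_preset(track, 1.5, 3.0)     # add a preset explosion force
print(track)
```

Keeping each operation as a pure function over the track makes the manager's edit history easy to replay or undo before the final synthesis step.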
It can be appreciated that a plurality of preset force feedback templates are pre-stored in the processing end 2; the preset force feedback templates can be applied in scenarios where certain force information is not easily collected directly, so as to improve the atmospheric force feedback experience of the user.
The function and content of the preset force feedback template comprise: intensity control of force (light, medium, heavy, etc.); stress range zone control (small range, medium range, large range, etc.); stress duration versus strength curve; whether the stress expands (e.g., electric shock effect simulation); the type of force (external force such as wind force, hydraulic force, etc.); the forces on emotional atmospheres (tension, violence, relaxation, softness, etc.).
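One possible in-memory shape for such a template, covering the fields listed above, is sketched below. The field names and example values are assumptions made for illustration; the patent does not prescribe a data format.

```python
# Hypothetical structure for a preset force feedback template: intensity,
# stress range, duration-versus-strength curve, spreading flag, force type,
# and emotional atmosphere, matching the function list in the text.

from dataclasses import dataclass, field

@dataclass
class ForceFeedbackTemplate:
    name: str
    intensity: str = "medium"       # "light" / "medium" / "heavy"
    stress_range: str = "small"     # "small" / "medium" / "large"
    duration_curve: list = field(default_factory=list)  # (t, strength) pairs
    spreads: bool = False           # e.g. electric-shock effect simulation
    force_type: str = "generic"     # e.g. "wind", "hydraulic"
    atmosphere: str = "relaxed"     # "tension", "violence", "soft", ...

# An electric-shock template: heavy, spreading, fading out over 0.5 s.
shock = ForceFeedbackTemplate(
    name="electric_shock", intensity="heavy", stress_range="medium",
    duration_curve=[(0.0, 1.0), (0.2, 0.4), (0.5, 0.0)], spreads=True,
    atmosphere="tension")
print(shock.spreads, shock.intensity)
```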
To sum up, as shown in the schematic diagram of fig. 7: first, the force collecting garment 1 uses the force collecting elements to collect the vertex space information (that is, the motion coordinate information) of each vertex in the force collecting garment 1 while the actor moves, together with the time of the actor's motion, specifically the time stamp corresponding to each piece of motion coordinate information. The force collecting garment 1 then processes the vertex space information and time into spatial vertex coordinate trajectories (i.e., the trajectory coordinate information). The force collecting garment 1 sends the spatial vertex coordinate trajectories and the real-time stress information to the processing end 2, and the processing end 2 either processes the real-time stress information by combining the prefabricated stress templates (i.e., the preset force feedback templates) with the manager's operations on the front-end interactive interface, or leaves it unprocessed. That is, when unprocessed, the original force data is retained; when processed, the force data may be cancelled (i.e., no force) or edited into force data a, force data b, force data c, and so on.
In some embodiments, the processing end 2 also considers the influence of the external force on each vertex of the human body when processing the real-time stress information collected by the force collecting garment 1, for example, simulating the change of each vertex information of the external force applied to the human body.
It should be noted that when an external force acts on a person, force transmission changes the positions of the skin's vertices, and the force spreads and attenuates across the skin (with the spreading range determined by the force). For example, as shown in fig. 8, when a force acts on bone a, it is transmitted to joint a and bone b in attenuated form. Therefore, force transmission attenuation information can be calibrated at different positions of the corresponding bones and joints of the human body, the action of force on the body can be configured by customizing the resistance of the bones and joints, and the resulting changes in vertex information can be recorded.
For example: when a force is applied to a joint with specialized cushioning structures (e.g., menisci and intervertebral discs), such as the knee or hip, these structures effectively absorb and disperse the impact force, reducing the pressure on the articular surfaces. If the force does not exceed the joint's tolerance, the joint recovers almost completely once the external force is removed, with no obvious residual deformation. When a force is applied to muscle, the muscle absorbs and stores part of the energy, and when the external force is removed it slowly returns to its original state. The local deformation at the point of force on the muscle spreads rapidly over the whole stressed area, so the force manifests as an overall compression and deformation.
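The attenuation idea in the two paragraphs above can be sketched as a spread over the skeleton graph, where each bone or joint has a calibrated attenuation factor. The graph layout, the factors, and the cutoff below are illustrative assumptions.

```python
# Hypothetical sketch of attenuated force transmission: a force applied at
# one bone propagates to neighbouring joints and bones (as in fig. 8,
# bone a -> joint a -> bone b), losing a calibrated fraction of its
# magnitude at each node it passes through.

from collections import deque

# Undirected skeleton graph: bone/joint -> neighbours.
neighbours = {"bone_a": ["joint_a"], "joint_a": ["bone_a", "bone_b"],
              "bone_b": ["joint_a", "joint_b"], "joint_b": ["bone_b"]}
# Per-node attenuation (joints cushion more than bones in this example).
attenuation = {"bone_a": 0.8, "joint_a": 0.5, "bone_b": 0.8, "joint_b": 0.5}

def propagate(source, force, cutoff=0.05):
    """Breadth-first spread of `force` from `source`; forces below the
    cutoff stop spreading, which bounds the affected range."""
    felt = {source: force}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nxt in neighbours[node]:
            f = felt[node] * attenuation[nxt]
            if f > cutoff and f > felt.get(nxt, 0.0):
                felt[nxt] = f
                queue.append(nxt)
    return felt

print(propagate("bone_a", 10.0))  # force felt at each bone/joint
```

The cutoff realizes the note that the spreading range is determined by the force: a stronger impulse reaches further before falling below the threshold.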
Specifically, the force feedback garment 3 is configured to match initial coordinate information of each target vertex and initial stress information of each target vertex in an initial posture of a user with initial coordinate information and initial stress information in the force collection garment respectively; receiving a touch signal, generating an instruction according to the touch signal and sending the instruction to a target force feedback element so that the target force feedback element outputs a force corresponding to the instruction, wherein the target force feedback element is any force feedback element.
It should be noted that the force feedback garment 3 initializes the positions of the user's target vertices in the initial posture to obtain initial coordinate information and initial stress information, and then matches the initial coordinate information and initial stress information of each target vertex against those of the corresponding vertex in the force collecting garment.
For example: a represents force collecting clothing, B represents force feedback clothing, and each skeleton of A corresponds to each skeleton of B, such as right thumb fingertip on A corresponds to right thumb fingertip on B. Each joint of a corresponds to each joint of B, e.g., the hip joint on a corresponds to the hip joint on B.
Also for example: the initial posture for the force collecting garment is an A-pose standing posture, while the initial posture for the force feedback garment is a sitting posture, and the vertex information of the two initial postures is matched. For example, in the standing posture the relative spatial coordinates of the wrist of the force collecting garment are (1, 1), and the relative spatial coordinates of the wrist of the force feedback garment are (2, 2); the wrist coordinate of the force collecting garment's standing posture and the wrist coordinate of the force feedback garment's sitting posture are both unified to (0, 0). In this way, when a vertex of the force collecting garment is spatially displaced (for example, to (0, 1)), the corresponding vertex coordinate of the force feedback garment is displaced by (0, 1) as well.
For another example: at frame 1, when the wrist vertex of the force-collecting garment is spatially displaced from the initial pose (e.g., becomes (0, 1)), the coordinates of the wrist vertex of the force-feedback garment will also be displaced by (0, 1). At this time, the relative spatial coordinates of the wrist vertex of the force feedback garment are (2, 3).
At frame 2, when the wrist of the force-collecting garment is spatially displaced (e.g., becomes (0, 2)), the coordinates of the wrist vertices of the force-feedback garment will also be displaced (0, 2). At this time, the relative spatial coordinates of the wrist vertex of the force feedback garment are (2, 4).
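The matching in the frames above can be sketched as a shared-displacement mapping: once corresponding vertices are unified at start-up, any displacement of a collection-garment vertex is replayed as the same displacement on the feedback garment, regardless of the wearer's pose. The coordinates below mirror the wrist example in the text; the function name is an illustrative assumption.

```python
# Sketch of initial-pose matching between the two garments: only the
# displacement from each garment's own initial coordinate is shared.

def make_mapper(collect_initial, feedback_initial):
    """Return a function mapping a force-collecting-garment coordinate to
    the corresponding force-feedback-garment coordinate."""
    def to_feedback(collect_now):
        # Displacement of the collection vertex from its initial pose...
        disp = tuple(c - c0 for c, c0 in zip(collect_now, collect_initial))
        # ...applied on top of the feedback vertex's initial pose.
        return tuple(f0 + d for f0, d in zip(feedback_initial, disp))
    return to_feedback

# Wrist: collecting garment starts at (1, 1), feedback garment at (2, 2).
wrist = make_mapper(collect_initial=(1, 1), feedback_initial=(2, 2))
print(wrist((1, 2)))   # frame 1: displacement (0, 1) -> feedback (2, 3)
print(wrist((1, 3)))   # frame 2: displacement (0, 2) -> feedback (2, 4)
```

This is equivalent to the text's "unify both to (0, 0)" step: subtracting each garment's initial coordinate is the unification, and the displacement is what remains.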
It should be noted that, the specific process of the force feedback garment 3 generating the command according to the haptic signal and sending the command to the target force feedback element is as follows:
process C1: and obtaining a target force feedback element corresponding to the tactile signal from all the force feedback elements.
It will be appreciated that the haptic signals indicate which target vertices are subject to force, which target vertices are moving and which target vertices are subject to how much force, and thus the force feedback garment 3, upon receiving the haptic signals, obtains the corresponding target force feedback elements from all force feedback elements.
Process C2: a corresponding command for each target force feedback element is generated from the haptic signal.
That is, according to the magnitude of the force received by the target vertex indicated in the haptic signal and the movement track of the target vertex, a corresponding command for each target force feedback element is generated.
Process C3: each instruction is sent to its corresponding target force feedback element.
It can be understood that the force feedback garment 3 sends each command to its corresponding target force feedback element, so that the target force feedback element outputs a corresponding force according to the command.
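Processes C1 to C3 can be sketched as a dispatch loop. The haptic signal format, the element interface, and the vertex names below are assumptions for illustration; the patent does not specify them.

```python
# Minimal sketch of processes C1-C3: a haptic signal names the target
# vertices together with their force and displacement; the garment looks up
# the force feedback element at each vertex and sends it an instruction.

class ForceFeedbackElement:
    def __init__(self, vertex_id):
        self.vertex_id = vertex_id
        self.last_instruction = None
    def execute(self, instruction):
        # Real hardware would output the corresponding force here.
        self.last_instruction = instruction

# One force feedback element per calibrated vertex (example vertices).
elements = {v: ForceFeedbackElement(v) for v in ("wrist", "shoulder", "knee")}

def dispatch(haptic_signal):
    """haptic_signal: list of {vertex, force, displacement} entries."""
    targeted = []
    for entry in haptic_signal:                   # C1: find target elements
        elem = elements[entry["vertex"]]
        instruction = {"force": entry["force"],   # C2: build the instruction
                       "displacement": entry["displacement"]}
        elem.execute(instruction)                 # C3: send to the element
        targeted.append(elem.vertex_id)
    return targeted

hit = dispatch([{"vertex": "wrist", "force": 3.5, "displacement": (0, 1)}])
print(hit, elements["wrist"].last_instruction["force"])
```

Elements not named in the signal receive no instruction, which matches the point that the haptic signal indicates which target vertices are subject to force.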
In some embodiments, the force feedback garment 3 provides a user-defined force function.
It should be noted that, to enhance the user experience, in some special scenarios the real force F cannot be applied to the user in full, such as the wind force when a character drives in the video: at a speed of 300, the pressure caused by the wind is too great and would make the user uncomfortable, so the real force F needs to be reduced, and the real force at a speed of 300 is represented by a similar but weaker force (for example, the force at a speed of 100). This force approximating F is the user-defined standard force.
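One way to realize the user-defined standard force is to pass comfortable forces through unchanged and compress anything above a comfort limit so it still scales with the real force but stays tolerable. The limit and compression factor below are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of the user-defined standard force: the real force F
# is remapped to a similar but gentler force when it would be uncomfortable
# (e.g. wind pressure at very high driving speed).

def standard_force(real_force, comfort_limit=30.0):
    """Comfortable forces pass through; above the limit the excess is
    compressed so the output still grows with F, just much more slowly."""
    if real_force <= comfort_limit:
        return real_force
    return comfort_limit + (real_force - comfort_limit) * 0.25

print(standard_force(10.0))   # a comfortable force is unchanged
print(standard_force(130.0))  # a strong wind force is softened
```

A per-template mapping like this preserves the shape of the force over time (stronger wind still feels stronger) while bounding the absolute pressure on the user.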
Further, there are delayed forces, i.e., cases where the force range of an area changes after the force is applied. In human muscle or ligament tissue subjected to impact or stretching, the initial stress may cause localized injury, followed by an inflammatory response and repair process that progressively enlarges the affected area. Likewise, human articular cartilage under prolonged load may undergo wear and degenerative changes, another example of the range of influence of a stress expanding with delay.
It will be appreciated that, while the pipeline from force collecting garment 1 to force feedback garment 3 can in theory reproduce the actor's full motion experience, most users watch while sitting or standing, without a large range of motion. Therefore, to avoid original forces being transmitted directly to the user and feeling jarring, as many optimized templates as possible are provided that match the viewer's actual body sensation without being abrupt.
For example, a running motion is performed by the actor with the whole body; considering the user's viewing experience, the force information from the lower arm to the wrist and from the lower leg to the ankle can be simplified to remove redundant force information.
Some optimized templates can, for example, prompt the user to perform micro-movements and stretches: while sitting, small-range joint activity and muscle stretching such as neck rotation, shoulder shrugs, wrist rotation, ankle flexing, and waist twisting; while standing (space permitting), tiptoe raises, squat-and-stand repetitions, weight-shifting balance exercises between the legs, or wall-supported forward and backward leg stretches.
That is, the user's body is kept active even in a largely static state, relieving the discomfort of staying still for long periods and improving physical fitness to a certain extent.
In the embodiment of the invention, the stress information of the actors is genuinely captured by the force collecting garment, edited and processed at the processing end, and the processed force data is synthesized with the video. The haptic elements on the force feedback garment are matched to the force collecting garment, and the processed force data is fed back to the user, simulating the force feedback experienced in the video so that each part of the user's body feels force accurately. This reproduces the interactive force experience of a character in the video more realistically and enhances the user's detailed, immersive experience. Further, a new video-watching system is created that is extensible, extending the useful life of the video while increasing user stickiness. Meanwhile, the user can select a character viewpoint by identification ID, so that watching the same video brings a different experience each time; the user can rewatch the video repeatedly, further extending its useful life.
Corresponding to the virtual-real combined force processing system provided by the embodiment of the present invention, referring to fig. 9, a flowchart of a virtual-real combined force processing method provided by the embodiment of the present invention is shown, where the method is applied to the force processing system, and the method includes:
Step S901: and collecting coordinate information of each preset vertex and stress information of each preset vertex under the initial posture of the actor by using the force collecting clothing, and initializing to obtain initial coordinate information and initial stress information.
In the specific implementation step S901, the actor wears the force collecting garment, and the force collecting garment is used to collect coordinate information of each preset vertex and stress information of each preset vertex when the actor is in an initial posture, then initialize the coordinate information of each preset vertex to obtain initial coordinate information, and initialize the stress information of each preset vertex to obtain initial stress information.
Step S902: and collecting track coordinate information and real-time stress information generated by movement of each preset vertex when the actor moves by using a force collecting element in the force collecting clothing.
It should be noted that the force collecting garment is worn by the actor; the actor's pose changes during movement and is subjected to many forces, so the force collecting elements are used to collect the trajectory coordinate information and real-time stress information generated by the movement of each preset vertex while the actor moves, which can be understood as motion capture of the actor.
Step S903: and transmitting the track coordinate information, the real-time stress information and the identification ID of the actor to a processing end.
In the specific implementation process of step S903, the force collecting garment sends the track coordinate information, the real-time stress information, and the identification ID of the actor to the processing end.
It will be appreciated that the identification ID of an actor indicates a character in the video represented by the actor. Multiple roles can be arranged in one video, so that actors of each role can wear force collection clothes to capture actions and forces, a user can switch visual angles of the multiple roles to experience when watching the video, and experience diversity is increased.
Step S904: and acquiring video data by using a processing end, and generating force feedback data corresponding to the identification ID by combining a preset force feedback template, track coordinate information and real-time stress information.
It can be understood that the processing end contains a plurality of prefabricated forces, which can be added at time nodes in the video where no real-time stress information was collected.
That is, the force data may be artificially added in order to extend the force data before a certain time node or to enhance the atmosphere.
The processing end is combined with the operation of the manager to edit the collected real-time stress information and coordinate track information to obtain the manufactured force data, and then the force data and the video are synchronously stored.
Step S905: when the processing end receives a playing instruction containing the target identification ID, searching target force feedback data corresponding to the target identification ID, converting the target force feedback data into a touch signal, and sending the touch signal to the force feedback garment.
The target ID is any ID.
Step S906: and matching the initial coordinate information of each preset target vertex and the initial stress information of each preset target vertex under the initial posture of the user with the initial coordinate information and the initial stress information in the force collection garment respectively by using the force feedback garment.
When the force feedback garment is started, the initial coordinate information of each preset target vertex and the initial stress information of each preset target vertex are matched with the initial coordinate information and the initial stress information in the force collection garment, so that user experience and video synchronization are realized.
Step S907: and receiving the touch signal by using the force feedback clothing, generating an instruction according to the touch signal, and sending the instruction to a target force feedback element in the force feedback clothing so that the target force feedback element outputs the force corresponding to the instruction.
The haptic signal receiver receives the haptic signal sent by the processing end; the force feedback garment generates instructions according to the haptic signal and distributes them to the force feedback elements at different positions, triggering force feedback that matches the film and providing the user with a realistic, fine-grained interactive experience.
In the embodiment of the invention, the stress information of the actors is genuinely captured by the force collecting garment, edited and processed at the processing end, and the processed force data is synthesized with the video. The haptic elements on the force feedback garment are matched to the force collecting garment, and the processed force data is fed back to the user, simulating the force feedback experienced in the video so that each part of the user's body feels force accurately. This reproduces the interactive force experience of a character in the video more realistically and enhances the user's detailed, immersive experience. Further, a new video-watching system is created that is extensible, extending the useful life of the video while increasing user stickiness. Meanwhile, the user can select a character viewpoint by identification ID, so that watching the same video brings a different experience each time; the user can rewatch the video repeatedly, further extending its useful life.
Further, referring to fig. 10, an embodiment of the present invention provides a schematic structural diagram of a force collecting garment, where the force collecting garment includes at least a vertex coordinate collector, and a data transmitting module connected to the vertex coordinate collector.
Specifically, the vertex coordinate collector is used for collecting track coordinate information and real-time stress information of each vertex. And the data transmitting module is used for transmitting the data to a processing end in communication connection with the force collection clothing according to the track coordinate information and the real-time stress information.
It will be appreciated that the data transmission module may be in the form of a wire, bluetooth, WIFI, etc., with flexible sensors being most preferred. The data transmission module comprises an element positioning system and a force conversion data module. Wherein the element positioning system is for positioning a force collecting element in a force collecting garment. The force conversion data module is used for converting the collected real-time stress information.
In a specific embodiment, the force collecting garment further comprises a power supply system. The structure of the power supply system is not limited, and includes, but is not limited to, a battery (such as a flexible battery), a charging device, and the like.
It should be noted that the manner of mounting the force collecting elements in the force collection garment includes, but is not limited to, sewing, adhering, etc.
In the embodiment of the invention, a plurality of vertices are calibrated in advance on the force collection clothing and a plurality of force collecting elements are arranged, so that when the force collection clothing captures the actor's movements, the force data are collected genuinely and in fine detail through the track coordinate information and real-time stress information of each vertex, improving the user's viewing experience.
Further, referring to fig. 11, an embodiment of the present invention provides a structural schematic diagram of a force feedback garment, where the force feedback garment at least includes a tactile signal receiver, a data receiving module connected to the tactile signal receiver, and a plurality of force feedback elements connected to the data receiving module.
Specifically, the haptic signal receiver is configured to receive the haptic signal sent by the processing end, and the data receiving module is configured to generate an instruction according to the haptic signal and send it to a target force feedback element in the force feedback garment, so that the target force feedback element outputs the force corresponding to the instruction.
It is understood that the data receiving module comprises an element positioning system, wherein the element positioning system is used for positioning a force feedback element in the force feedback garment.
In a specific embodiment, the force feedback garment further comprises a power supply system and an adjustment module. The structure of the power supply system is not limited, and includes, but is not limited to, a battery (such as a flexible battery), a charging device, and the like. And the adjusting module is used for receiving an adjusting instruction input by a user and adjusting the degree of the force output by the target force feedback element according to the adjusting instruction.
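A minimal sketch of the receive-dispatch-adjust path described above, assuming the haptic signal maps target vertex IDs to commanded forces (an illustrative layout, not the claimed format), with the adjusting module modeled as a user-chosen gain:

```python
def dispatch(haptic_signal, gain=1.0, max_force=10.0):
    """Map a haptic signal to per-element output commands.

    haptic_signal: dict of target vertex id -> commanded force (newtons).
    gain: user adjustment from the adjusting module (e.g. 0.5 = half strength).
    max_force: assumed safety cap on any single element's output.
    """
    instructions = {}
    for vertex_id, force in haptic_signal.items():
        scaled = min(force * gain, max_force)  # apply user gain, clamp for safety
        instructions[vertex_id] = scaled
    return instructions

# A user who finds the feedback too strong halves it via the adjusting module:
cmds = dispatch({3: 6.0, 7: 2.0}, gain=0.5)
```

The element positioning system would then route each instruction to the physical element bound to that vertex; that routing layer is omitted here.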
It is understood that the output manner of the force feedback element is not limited, and includes vibration, magnetic force, spiral force, pressure, etc.
In the embodiment of the invention, a plurality of target vertices are calibrated in advance on the force feedback clothing and a plurality of force feedback elements are arranged. The force data collected genuinely and in fine detail by the force collection clothing are output to the user, bringing an immersive, detailed experience, changing the traditional viewing mode, and simulating more realistically the interaction experience of a character subjected to force in the film and television world.
In this specification, each embodiment is described in a progressive manner; identical and similar parts of the embodiments are referred to each other, and each embodiment mainly describes its differences from the others. In particular, for a system or system embodiment, since it is substantially similar to a method embodiment, the description is relatively simple, and reference may be made in part to the description of the method embodiment. The systems and system embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
Those skilled in the art will further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative units and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (9)
1. A virtual-real combined force processing system, the system comprising: a force collection garment, a processing end and a force feedback garment;
the processing end is respectively in communication connection with the force collection garment and the force feedback garment;
The force collection garment comprises a plurality of pre-calibrated vertices, each vertex being correspondingly provided with a force collection element; the force feedback garment is provided with at least a plurality of force feedback elements corresponding to pre-calibrated target vertices; and each force feedback element is bound one-to-one with a force collection element;
The force collection garment is used for initializing coordinate information of each vertex and stress information of each vertex under the initial posture of an actor to obtain initial coordinate information and initial stress information; the force collecting element is used for collecting track coordinate information and real-time stress information generated by the movement of each vertex when the actor moves, and the track coordinate information, the real-time stress information and the identification ID of the actor are sent to the processing end;
the processing end is used for receiving the track coordinate information, the real-time stress information and the identification ID; acquiring video data and combining a preset force feedback template, the track coordinate information and the real-time stress information to generate force feedback data corresponding to the identification ID; when a playing instruction containing a target identification ID is received, searching target force feedback data corresponding to the target identification ID, converting the target force feedback data into a touch signal, and sending the touch signal to the force feedback garment;
The force feedback garment is used for matching initial coordinate information of each target vertex and initial stress information of each target vertex under an initial posture of a user with the initial coordinate information and the initial stress information in the force collection garment respectively; and receiving the touch signal, generating an instruction according to the touch signal, and sending the instruction to a target force feedback element so that the target force feedback element outputs force corresponding to the instruction, wherein the target force feedback element is any force feedback element.
2. The system of claim 1, wherein, in initializing the coordinate information of each of the vertices and the stress information of each of the vertices in the initial pose of the actor to obtain the initial coordinate information and the initial stress information, the force collection garment is specifically configured to:
acquiring coordinates of each preset joint on an actor when the actor is in an initial posture;
for each vertex, acquiring the weight of each preset bone with respect to the current vertex;
calculating a transformation matrix corresponding to each preset bone according to the target transformation matrix corresponding to each preset joint;
calculating to obtain coordinate information of the current vertex based on the weight, the transformation matrix and the coordinates of the preset joint;
Marking the coordinate information of each vertex as initial coordinate information when the actor is in an initial posture;
the force received by the force feedback elements at each of the vertices when the actor is in an initial pose is marked as initial stress information.
3. The system of claim 1, wherein, in collecting with the force collection elements the track coordinate information generated by the movement of each of the vertices as the actor moves, the force collection garment is specifically configured to:
Collecting a plurality of continuous motion coordinate information of each vertex and a time stamp corresponding to each motion coordinate information when the actor moves by using the force collecting element;
and calculating the track coordinate information of each vertex based on all the continuous motion coordinate information and the corresponding time stamp of each motion coordinate information.
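The calculation in claim 3 can be sketched as deriving per-segment displacement and velocity from consecutive timestamped motion samples; representing the track this way is an interpretive assumption:

```python
def track_from_samples(samples):
    """Derive track information for one vertex from timestamped samples.

    samples: list of (timestamp_s, (x, y, z)) in chronological order,
    as collected by the force collection element.
    Returns one dict per segment between consecutive samples.
    """
    track = []
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        dt = t1 - t0
        disp = tuple(b - a for a, b in zip(p0, p1))          # movement over the segment
        vel = tuple(d / dt for d in disp) if dt > 0 else (0.0, 0.0, 0.0)
        track.append({"t": t0, "dt": dt, "displacement": disp, "velocity": vel})
    return track

# Three samples half a second apart: move 1 m along x, then 1 m along y.
segs = track_from_samples([(0.0, (0, 0, 0)), (0.5, (1, 0, 0)), (1.0, (1, 1, 0))])
```

The timestamps are what make the track more than a point list: they let the processing end recover speed and ordering when aligning force events with video frames.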
4. The system of claim 1, wherein, in acquiring video data and generating the force feedback data corresponding to the identification ID by combining the preset force feedback template, the track coordinate information and the real-time stress information, the processing end is specifically configured to:
Acquiring video data;
Loading the video data, the real-time stress information and preset force in a preset force feedback template to a front-end interactive interface;
Responding to the enhancement operation of the manager on the first stress information in the real-time stress information at the front-end interactive interface, and increasing the force degree of the first stress information to obtain first video data;
and/or;
Responding to weakening operation of a manager on second stress information in the real-time stress information at the front-end interactive interface, and reducing the force degree of the second stress information to obtain second video data;
and/or;
responding to the adding operation of the manager on the front-end interactive interface for a target video picture in the video data, and adding the preset force at a time stamp corresponding to the target video picture to obtain third video data;
synthesizing the first video data and/or the second video data and/or the third video data to obtain force feedback data;
and carrying out association storage on the force feedback data and the identification ID.
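The editing operations of claim 4 can be sketched as pure transformations over a timestamp-to-force stream; the event representation (timestamp mapped to a force value) is an assumption for illustration:

```python
def enhance(stream, timestamps, factor=1.5):
    """Increase the force degree at the given timestamps (first stress info)."""
    return {t: (f * factor if t in timestamps else f) for t, f in stream.items()}

def weaken(stream, timestamps, factor=0.5):
    """Reduce the force degree at the given timestamps (second stress info)."""
    return {t: (f * factor if t in timestamps else f) for t, f in stream.items()}

def add_preset(stream, timestamp, preset_force):
    """Add a preset force at the timestamp of a target video picture."""
    merged = dict(stream)
    merged[timestamp] = merged.get(timestamp, 0.0) + preset_force
    return merged

# A manager enhances the hit at t=1.0, softens the one at t=0.0,
# and adds a preset force at t=2.0 where the video needed one.
raw = {0.0: 2.0, 1.0: 4.0}
edited = add_preset(weaken(enhance(raw, {1.0}), {0.0}), 2.0, 3.0)
# -> {0.0: 1.0, 1.0: 6.0, 2.0: 3.0}
```

The three operations compose freely, matching the claim's "and/or" structure; the synthesized result is what would be stored in association with the identification ID.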
5. The system according to claim 1, wherein, in generating instructions according to the haptic signal and sending the instructions to target force feedback elements, the force feedback garment is specifically configured to:
obtaining target force feedback elements corresponding to the tactile signals from all force feedback elements;
generating a corresponding instruction for each target force feedback element according to the haptic signal;
And sending each instruction to the corresponding target force feedback element.
6. The system of claim 1, wherein the number of vertices on the force collection garment is the same as the number of target vertices on the force feedback garment.
7. A method of virtual-real combined force processing, wherein the method is applied to the force processing system of any one of claims 1-6, the method comprising:
Collecting coordinate information of each preset vertex and stress information of each preset vertex under the initial posture of an actor by using force collecting clothing, and initializing to obtain initial coordinate information and initial stress information;
Utilizing a force collecting element in the force collecting clothing to collect track coordinate information and real-time stress information generated by movement of each preset vertex when the actor moves;
transmitting the track coordinate information, the real-time stress information and the identification ID of the actor to a processing end;
acquiring video data by using the processing end, and generating force feedback data corresponding to the identification ID by combining a preset force feedback template, the track coordinate information and the real-time stress information;
When the processing end receives a playing instruction containing a target identification ID, searching for target force feedback data corresponding to the target identification ID, converting the target force feedback data into a touch signal, and sending the touch signal to a force feedback garment; wherein the target identification ID is any one of the identification IDs;
The initial coordinate information of each preset target vertex and the initial stress information of each preset target vertex under the initial posture of a user are respectively matched with the initial coordinate information and the initial stress information in the force collecting clothing by using the force feedback clothing;
and receiving the touch signal by using the force feedback clothing, generating an instruction according to the touch signal, and sending the instruction to a target force feedback element in the force feedback clothing so that the target force feedback element outputs a force corresponding to the instruction.
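The playback path of claim 7 — association storage of force feedback data keyed by identification ID, then lookup when a playing instruction arrives — can be sketched as follows; the store and signal shapes are illustrative assumptions:

```python
class ForceFeedbackStore:
    """Toy model of the processing end's association storage."""

    def __init__(self):
        self._by_id = {}

    def save(self, identification_id, force_feedback_data):
        # association storage of the force feedback data with the ID
        self._by_id[identification_id] = force_feedback_data

    def on_play_instruction(self, target_id):
        """Look up the target force feedback data for a playing instruction."""
        data = self._by_id.get(target_id)
        if data is None:
            raise KeyError(f"no force feedback data for ID {target_id!r}")
        # conversion into a touch signal would happen here;
        # the sketch passes the data through unchanged
        return {"touch_signal": data}

store = ForceFeedbackStore()
store.save("role_A", {0.0: 1.0, 1.0: 6.0})
signal = store.on_play_instruction("role_A")
```

Because each actor (and thus each character viewpoint) has its own identification ID, selecting a different ID at playback retrieves a different force track for the same video.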
8. A force collection garment, wherein the force collection garment is worn by an actor and comprises at least a plurality of pre-calibrated vertices, force collection elements corresponding to each of the vertices, a vertex coordinate collector, a data transmitting module, and a data processing module; wherein each vertex is correspondingly provided with one force collection element, the vertex coordinate collector is connected with each force collection element, the data transmitting module is connected with the vertex coordinate collector, and the data processing module is connected with the data transmitting module;
The force collecting element is used for collecting track coordinate information and real-time stress information generated by the movement of each vertex when the actor moves;
the vertex coordinate collector is used for gathering the track coordinate information and the real-time stress information collected by the force collection elements arranged at the pre-calibrated vertices;
the data processing module is used for initializing coordinate information of each vertex and stress information of each vertex under the initial posture of the actor to obtain initial coordinate information and initial stress information;
the data transmitting module is used for positioning the force collection elements; converting the collected real-time stress information to obtain converted real-time stress information; transmitting the initial coordinate information and the initial stress information to a force feedback garment connected with the force collection garment; and sending the identification ID of the actor, the track coordinate information and the converted real-time stress information to a processing end in communication connection with the force collection garment, so that the processing end generates a touch signal according to the identification ID of the actor, the track coordinate information and the converted real-time stress information and sends the touch signal to the force feedback garment, whereby the force feedback garment outputs force based on the touch signal.
9. A force feedback garment, wherein the force feedback garment is worn by a user and is connected to a processing end and a force collection garment respectively; the force feedback garment comprises at least a plurality of pre-calibrated target vertices, target force feedback elements corresponding to each of the target vertices, a tactile signal receiver, a data receiving module, a matching module, and an adjusting module; wherein each target vertex is correspondingly provided with one target force feedback element; the tactile signal receiver is connected with the data receiving module; the data receiving module is connected with the target force feedback elements; and the target force feedback elements are respectively connected with the matching module and the adjusting module;
The target force feedback element is used for collecting initial coordinate information of each target vertex and initial stress information of each target vertex under the initial posture of the user;
The matching module is used for matching the initial coordinate information of each target vertex and the initial stress information of each target vertex with the initial coordinate information and the initial stress information sent by the force collection clothing respectively; the initial coordinate information and the initial stress information are obtained by the force collection clothing by collecting coordinate information of each vertex and stress information of each vertex under the initial posture of an actor and initializing;
the haptic signal receiver is configured to receive a haptic signal sent by the processing end, where the haptic signal is generated by the processing end receiving the identification ID of the actor, the track coordinate information, and the converted real-time stress information sent by the force collecting garment;
the data receiving module is used for positioning the target force feedback element, generating an instruction according to the touch signal and sending the instruction to the target force feedback element in the force feedback garment so that the target force feedback element outputs force corresponding to the instruction;
and the adjusting module is used for receiving the adjusting instruction input by the user and adjusting the degree of the force output by the target force feedback element according to the adjusting instruction.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410821472.7A CN118394222B (en) | 2024-06-24 | 2024-06-24 | Virtual-real combined force processing system, method, force collecting clothing and force feedback clothing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118394222A CN118394222A (en) | 2024-07-26 |
CN118394222B true CN118394222B (en) | 2024-09-06 |
Family
ID=91987465
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410821472.7A Active CN118394222B (en) | 2024-06-24 | 2024-06-24 | Virtual-real combined force processing system, method, force collecting clothing and force feedback clothing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118394222B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110652722A (en) * | 2018-06-29 | 2020-01-07 | 深圳市掌网科技股份有限公司 | Interactive system for generating force feedback effect in ball game room |
CN116107437A (en) * | 2023-04-13 | 2023-05-12 | 湖南快乐阳光互动娱乐传媒有限公司 | Virtual-real combined force feedback method and system, force feedback garment and related equipment |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI434718B (en) * | 2006-12-07 | 2014-04-21 | Cel Kom Llc | Tactile wearable gaming device |
US11205350B2 (en) * | 2019-05-15 | 2021-12-21 | International Business Machines Corporation | IoT-driven proprioceptive analytics with automated performer feedback |
CN113296605B (en) * | 2021-05-24 | 2023-03-17 | 中国科学院深圳先进技术研究院 | Force feedback method, force feedback device and electronic equipment |
2024-06-24 — CN application CN202410821472.7A granted as CN118394222B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN118394222A (en) | 2024-07-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7366196B2 (en) | Widespread simultaneous remote digital presentation world | |
JP7001841B2 (en) | Image processing methods and equipment, image devices and storage media | |
CN106110627B (en) | Sport and Wushu action correction device and method | |
Capin et al. | Virtual human representation and communication in VLNet | |
CN109710057B (en) | A method and system for dynamic reproduction of virtual reality | |
CN109671141B (en) | Image rendering method and device, storage medium and electronic device | |
CN112198959A (en) | Virtual reality interaction method, device and system | |
CN110637324B (en) | Three-dimensional data system and three-dimensional data processing method | |
CN113298858A (en) | Method, device, terminal and storage medium for generating action of virtual image | |
CN107341351A (en) | Intelligent body-building method, apparatus and system | |
CN107930048B (en) | Space somatosensory recognition motion analysis system and motion analysis method | |
CN107911737A (en) | Methods of exhibiting, device, computing device and the storage medium of media content | |
CN114510150A (en) | Experience system of virtual digital world | |
US20180261120A1 (en) | Video generating device, method of controlling video generating device, display system, video generation control program, and computer-readable storage medium | |
CN103019386A (en) | Method for controlling human-machine interaction and application thereof | |
JP7078577B2 (en) | Operational similarity evaluation device, method and program | |
CN107469315A (en) | A kind of fighting training system | |
CN118394222B (en) | Virtual-real combined force processing system, method, force collecting clothing and force feedback clothing | |
WO2022209220A1 (en) | Image processing device, image processing method, and recording medium | |
CN212789785U (en) | Household VR cinema system | |
Dallaire-Côté et al. | Animated self-avatars for motor rehabilitation applications that are biomechanically accurate, low-latency and easy to use | |
Destelle et al. | A multi-modal 3d capturing platform for learning and preservation of traditional sports and games | |
Kawahara et al. | Transformed human presence for puppetry | |
CN110119197A (en) | A kind of holographic interaction system | |
CN211180839U (en) | A kind of sports teaching equipment and sports teaching system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||