CN108230429A - Real-time whole-body posture reconstruction method based on head and two-hand positions and postures
- Publication number: CN108230429A
- Application number: CN201611150372.8A
- Authority: CN (China)
- Legal status: Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
Abstract
A real-time whole-body posture reconstruction method based on the positions and postures of the head and both hands. The upper body is reconstructed with an inverse-kinematics scheme: from the head position, the waist and shoulder positions are first estimated, and the elbow positions are then derived from the input hand positions by inverse kinematics. The lower body is reconstructed by blending preset animations. Finally, the positions of all joints are mapped in turn onto the upper and lower body of the reconstructed character to complete the whole-body posture reconstruction.
Description
Technical Field
The present invention relates to a technology in the field of virtual reality, specifically a technique that, using only the positions and postures of the user's hands and head, reconstructs the user's upper-body and lower-body posture in virtual reality in real time by means of inverse kinematics and animation blending, respectively.
Background Art
With the continual improvement of hardware specifications, virtual reality technology has become popular again and is increasingly favored within the industry. Although the current install base of devices is not yet large enough for users to appreciate the appeal of virtual reality, it is believed that once there are enough users and sufficiently rich content, virtual reality will show its full potential.
Across virtual reality applications, there is a frequent need to reconstruct the human body inside the virtual world. For example, in multi-user virtual reality, every user wants to see teammates appear as humanoid figures rather than, as is currently the case due to the limitations of tracking devices, a floating head and two floating hands; for a field that pursues immersion, this is an area that needs improvement. Consequently, how to reconstruct human motion as accurately as possible in real time from a small amount of low-dimensional sensor information has become a new research topic in the industry.
Even before the current wave of virtual reality, there was substantial research on human-posture reconstruction from a small number of sensors. A popular approach is data-driven: given real-time sensor posture data, matching postures are searched in an existing posture database, and an energy function is then used to reach an optimal trade-off among the search results, the actual input, and motion coherence, after which the reconstructed posture is presented.
Summary of the Invention
The prior art has the drawback that the reconstruction result depends heavily on the amount of motion data in the database and cannot synthesize postures absent from that database. To address this, the present invention proposes a real-time whole-body posture reconstruction method based on the positions and postures of the head and both hands: only the real-time positions and postures of the two hands and the head are required, and the upper and lower body are reconstructed in real time using inverse kinematics and animation blending, respectively.
The present invention is realized through the following technical solution:
The present invention reconstructs the upper body with an inverse-kinematics scheme: from the head position, the waist and shoulder positions are first estimated, and the elbow positions are then derived from the input hand positions by inverse kinematics. The lower body is reconstructed by blending preset animations. Finally, the positions of all joints are mapped in turn onto the upper and lower body of the reconstructed character to complete the whole-body posture reconstruction.
The upper-body reconstruction refers to the following: after calibration with the virtual reality controllers, the data collected during the sensor recording and buffering stage are used to compute, as feature values, the average forward direction of the head, the average direction of the perpendicular bisector of the line connecting the two hands, the angle between the hand-to-hand line and the body's forward direction, and the angle between the hand-to-hand line and the vertical plane of the head, from which the current forward direction of the body is obtained. The waist height, the absolute waist position, and the left and right shoulder positions are then computed in turn from the head position, and the elbow positions are derived by inverse kinematics.
The lower-body reconstruction refers to the following: the data collected during the sensor recording and buffering stage are used to compute the instantaneous velocity of the head and the swing posture of the two arms, from which the preset lower-limb animations are blended; after a validity check, the lower-body reconstruction is complete.
The present invention specifically comprises the following steps:
Step 1: the user holds the virtual reality controllers, wears the head-mounted display, and assumes a standard calibration posture (T-pose) to calibrate the body-related parameters h_s and w_s, so that the reconstruction scheme adapts to users of different heights and arm spans.
The standard calibration posture refers to: standing upright, both arms raised horizontally to the sides, head facing straight ahead.
The body-related parameters are computed from the calibration pose as scaling factors between the user and the character model, where p_h is the head position, h_m is the height of the character model to be mapped into the virtual scene, p_r is the right-hand position, p_l is the left-hand position, and w_m is the arm span of the character model to be mapped into the virtual scene.
Step 2: record the data collected by the sensors and compute auxiliary statistics in real time.
The data include: the head position and posture given by the head-mounted display, and the positions and postures of both hands given by the hand-held controllers.
Preferably, the head posture and hand postures are recorded as quaternions.
The auxiliary statistics include, but are not limited to: the speed and direction of motion of a tracked body over the previous several frames, computed from position information and the time interval; and the magnitude and direction of the acceleration of a tracked body over the previous several frames, computed from velocity information and the time interval.
These auxiliary statistics can be tuned manually for the scenario in which the invention is applied. For example, if a sensitive judgment of whether the head is moving is required, the head tracker's speed over the previous 2 frames can be used at this stage; if a more conservative judgment is desired, the speed over the previous 10 frames can be used instead.
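The buffering and statistics described above can be sketched as a small ring buffer of timestamped positions per tracked body. This is an illustrative implementation, not the patent's own code; the class and method names, and the finite-difference formulas, are assumptions (the embodiment below mentions a 50-frame buffer and 5-frame windows).

```python
from collections import deque

class TrackerBuffer:
    """Ring buffer of (position, timestamp) samples for one tracked body
    (head or a hand). Names and window sizes are illustrative."""

    def __init__(self, maxframes=50):
        # the embodiment buffers the sensor data of the last 50 frames
        self.samples = deque(maxlen=maxframes)

    def push(self, position, t):
        self.samples.append((position, t))

    def velocity(self, k=5):
        """Average velocity over the last k frames: position delta / time delta."""
        if len(self.samples) < k + 1:
            return (0.0, 0.0, 0.0)
        (p0, t0), (p1, t1) = self.samples[-k - 1], self.samples[-1]
        dt = t1 - t0
        if dt <= 0:
            return (0.0, 0.0, 0.0)
        return tuple((b - a) / dt for a, b in zip(p0, p1))

    def acceleration(self, k=5):
        """Change of average velocity between the two halves of the last 2k
        frames, divided by the elapsed time of the newer half."""
        if len(self.samples) < 2 * k + 1:
            return (0.0, 0.0, 0.0)
        v_old = self._vel_between(-2 * k - 1, -k - 1)
        v_new = self._vel_between(-k - 1, -1)
        dt = self.samples[-1][1] - self.samples[-k - 1][1]
        if dt <= 0:
            return (0.0, 0.0, 0.0)
        return tuple((b - a) / dt for a, b in zip(v_old, v_new))

    def _vel_between(self, i, j):
        (p0, t0), (p1, t1) = self.samples[i], self.samples[j]
        dt = (t1 - t0) or 1e-9
        return tuple((b - a) / dt for a, b in zip(p0, p1))
```

Switching between a "sensitive" and a "conservative" movement judgment then amounts to calling `velocity(2)` versus `velocity(10)` on the head's buffer.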
Step 3: estimate the body's forward direction, i.e., the direction the person's chest faces. Since the same set of head and hand positions and postures may correspond to several possible body forward directions, when the person's forward direction remains unchanged it is estimated from the most likely case, specifically:
3.1) Compute the average forward direction of the head, hf_avg, and the average direction of the perpendicular bisector of the line connecting the two hands, hb_avg. Specifically, hf_avg = (hf_prev * (n - 1) + hf_p) / n, where hf_prev is the head's average forward direction accumulated before this frame, hf_p is the projection of the head's forward direction onto the xz plane of the world coordinate system, and n is the number of frames over which the head's average forward direction is taken.
Similarly, hb_avg = (hb_prev * (n - 1) + up x (lhp - rhp)_p) / n, where hb_prev is the forward direction of the perpendicular bisector of the hand line accumulated before this frame, lhp and rhp are the positions of the left and right hands in this frame, (lhp - rhp)_p is the projection of the line between the two hands onto the xz plane of the world coordinate system, n is the number of frames averaged, and up is the upward direction of the world coordinate system, i.e., (0, 0, 1).
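The two running averages of step 3.1 can be sketched as below. The original formulas are images in the source, so the cumulative-average weighting, the renormalization after averaging, and the sign of the perpendicular direction (which of the two horizontal perpendiculars of the hand line counts as "forward") are assumptions; the sketch also assumes a y-up world, whereas the patent states up = (0, 0, 1).

```python
import math

def project_xz(v):
    """Project a 3D vector onto the horizontal plane and normalize (helper)."""
    x, _, z = v
    n = math.hypot(x, z) or 1e-9
    return (x / n, 0.0, z / n)

def running_average_dir(avg_prev, new_dir, n):
    """Cumulative average over n frames: (avg_prev*(n-1) + new_dir)/n,
    renormalized so the result stays a unit direction."""
    v = tuple((a * (n - 1) + b) / n for a, b in zip(avg_prev, new_dir))
    norm = math.sqrt(sum(c * c for c in v)) or 1e-9
    return tuple(c / norm for c in v)

def hand_bisector_dir(lhp, rhp, up=(0.0, 1.0, 0.0)):
    """Horizontal direction perpendicular to the left-right hand line:
    cross(up, lhp - rhp), projected to the horizontal plane."""
    d = tuple(a - b for a, b in zip(lhp, rhp))
    c = (up[1] * d[2] - up[2] * d[1],
         up[2] * d[0] - up[0] * d[2],
         up[0] * d[1] - up[1] * d[0])
    return project_xz(c)
```

Per frame, `hf_avg = running_average_dir(hf_avg, project_xz(head_forward), n)` and `hb_avg = running_average_dir(hb_avg, hand_bisector_dir(lhp, rhp), n)`.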
3.2) Further compute the angle a_cross between the hand-to-hand line and the body's forward direction, and the angle a_linear between the hand-to-hand line and the vertical plane of the head, where hp is the head position in this frame, lhp and rhp are the positions of the left and right hands in this frame, and the subscript p denotes the projection of a vector onto the xz plane of the world coordinate system.
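The exact formulas for a_cross and a_linear are images in the source and are not reproduced; both, however, are angles between horizontal projections of vectors, which can be measured with a helper like the following (an assumption about how the quantities would be computed, not the patent's formula):

```python
import math

def angle_deg_xz(u, v):
    """Unsigned angle in degrees between the horizontal (xz-plane)
    projections of two 3D vectors, e.g. the hand-to-hand line versus a
    body/head reference direction."""
    ux, uz = u[0], u[2]
    vx, vz = v[0], v[2]
    nu = math.hypot(ux, uz) or 1e-9
    nv = math.hypot(vx, vz) or 1e-9
    cos_a = max(-1.0, min(1.0, (ux * vx + uz * vz) / (nu * nv)))
    return math.degrees(math.acos(cos_a))
```

For example, a_cross near 90 degrees would then indicate that the hand line is roughly perpendicular to the body's forward direction, the configuration step 3.3.1 checks for.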
3.3) Determine the current body forward direction from a_cross and a_linear together with the average head forward direction hf_avg and the average hand-bisector direction hb_avg, specifically:
3.3.1) If the velocity statistics of Step 2 show that both the head and the hands were static over the previous 5 frames, check whether a_cross lies between 85° and 95° or a_linear lies between -5° and 5°; if so, the body forward direction bf_c for this frame is set from the averaged head and hand-bisector directions hf_avg and hb_avg.
3.3.2) If, over the recent frames, the head was static but the hands were not, the head's forward direction is more reliable than the perpendicular bisector of the hand line; therefore, when the angle between hf_avg and hb_avg is less than 5°, the body forward direction bf_c for this frame follows the hand-bisector average hb_avg, and otherwise it falls back to the head average hf_avg.
3.3.3) If, over the recent frames, the head was not static but the hands were, the head's average forward direction hf_avg is no longer reliable. Observation shows that in this case the user is usually standing still and looking around, so the body's forward direction is not changed arbitrarily: the body forward direction of this frame equals that of the previous frame, bf_c = bf_l.
3.3.4) If, over the recent frames, both the head and the hands were not static, there is no stable input from which to judge the body's forward direction; the previous forward direction is kept, i.e., the body forward direction of this frame equals that of the previous frame, bf_c = bf_l.
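The four cases of step 3.3 can be sketched as one selection function. The return values of the first two cases are assumptions, since the patent's formulas are images: case 1 is shown here averaging the two direction estimates, and case 2 prefers the hand bisector when the two estimates agree within 5° and the head average otherwise. Cases 3 and 4 (keep bf_l) are stated explicitly in the source.

```python
def choose_body_forward(head_static, hands_static, a_cross, a_linear,
                        hf_avg, hb_avg, bf_last, angle_between):
    """Select this frame's body forward direction bf_c per steps 3.3.1-3.3.4.
    angle_between(u, v) returns the angle in degrees between two directions."""
    if head_static and hands_static:
        # 3.3.1: hands roughly perpendicular to the body, or level with the head
        if 85.0 <= a_cross <= 95.0 or -5.0 <= a_linear <= 5.0:
            return tuple((a + b) / 2.0 for a, b in zip(hf_avg, hb_avg))
        return bf_last
    if head_static and not hands_static:
        # 3.3.2: head is the more reliable cue
        return hb_avg if angle_between(hf_avg, hb_avg) < 5.0 else hf_avg
    # 3.3.3 / 3.3.4: head moving (likely just looking around), or no stable
    # input at all -> keep the previous direction, bf_c = bf_l
    return bf_last
```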
Preferably, the current body forward direction bf_a is obtained by linear interpolation, to ensure that the body's forward direction does not change abruptly due to unstable input signals: bf_a = bf_l * (1 - f) + bf_c * f, where f is the blending parameter.
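The smoothing step is a plain per-frame lerp. In this sketch the result is renormalized so it remains a unit direction, which the patent does not state explicitly; the default f = 0.1 is the value the embodiment later reports as looking most natural.

```python
import math

def smooth_forward(bf_last, bf_current, f=0.1):
    """Per-frame smoothing bf_a = bf_l*(1-f) + bf_c*f, renormalized."""
    v = tuple(l * (1.0 - f) + c * f for l, c in zip(bf_last, bf_current))
    n = math.sqrt(sum(x * x for x in v)) or 1e-9
    return tuple(x / n for x in v)
```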
Step 4: construct the left shoulder position lse, the right shoulder position rse, and the waist height be_h of the upper torso, completing the reconstruction of the torso, specifically:
4.1) Compute the waist height be_h from the head position, where h_h is the height of the head, h_m is the height of the character model to be mapped into the virtual scene, m_uh is the distance from the model's head to its waist, and h_mh is the minimum head height to which the character model can squat, which prevents the waist from falling below the feet.
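The waist-height formula itself is an image in the source, so the following combination of the stated quantities (clamp the head height at the squat limit, then drop by the model's head-to-waist distance scaled to the current head height) is an assumption, not the patent's formula:

```python
def waist_height(h_h, h_m, m_uh, h_mh):
    """Sketch of step 4.1: derive the waist height from the tracked head
    height h_h and the model proportions (h_m, m_uh), clamped by the minimum
    squat head height h_mh. The exact formula is an assumption."""
    h = max(h_h, h_mh)            # never let the head go below the squat limit
    scale = h / h_m               # scale model proportions to the current pose
    return max(h - m_uh * scale, 0.0)   # waist never below the feet
```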
4.2) Compute the absolute waist position be from the horizontal position of the head; preferably, interpolation is used to damp small head movements, and the interpolated horizontal head position is used in the computation.
4.3) Compute the left shoulder position lse and the right shoulder position rse from the absolute waist position be: lse = me - offset and rse = me + offset, where factor_s is the shoulder-height adjustment parameter, bf is the body forward direction computed in Step 3, factor_w is the shoulder-width adjustment parameter, and me = be + (hp - be)/factor_s; the offset is a horizontal vector perpendicular to bf whose length is set by factor_w.
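Step 4.3 can be sketched as follows. The me formula is taken from the source; the construction of the offset (horizontal, perpendicular to bf, with length controlled by factor_w) and the parameter values are assumptions, since the offset formula is an image in the source.

```python
def shoulder_positions(be, hp, bf, factor_s=4.0, factor_w=4.0,
                       up=(0.0, 1.0, 0.0)):
    """Shoulder construction per step 4.3: me = be + (hp - be)/factor_s on the
    waist-head axis, then lse = me - offset, rse = me + offset."""
    me = tuple(b + (h - b) / factor_s for b, h in zip(be, hp))
    # sideways direction: cross(bf, up), i.e. horizontal and perpendicular
    # to the body's forward direction (assumed convention)
    side = (bf[1] * up[2] - bf[2] * up[1],
            bf[2] * up[0] - bf[0] * up[2],
            bf[0] * up[1] - bf[1] * up[0])
    offset = tuple(s / factor_w for s in side)
    lse = tuple(m - o for m, o in zip(me, offset))
    rse = tuple(m + o for m, o in zip(me, offset))
    return lse, rse
```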
Preferably, the squat state is activated when the user looks downward, i.e., the user may be looking at something near the feet and bending down to pick it up, or when the user's head has acceleration along the vertical direction; in the squat state, the horizontal position of the waist is frozen. When the head no longer has vertical acceleration, the offset between the head and the waist is recorded, so that the waist position can still be inferred accurately while the user walks in a squat. When the user stands up again, i.e., the head-mounted display returns to the calibrated height, the offset is reset to zero.
Step 5: from the hand-controller positions lhp and rhp and the shoulder positions lse and rse, use a commonly used inverse-kinematics method to derive the positions of the two elbows, completing the upper-body reconstruction.
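A minimal sketch of the kind of "commonly used inverse kinematics" step 5 refers to is the standard two-bone (law-of-cosines) solver below. The patent does not specify a particular solver; the `hint` vector, which biases which side of the shoulder-hand axis the elbow bends toward, is a common convention and an assumption here.

```python
import math

def solve_elbow(shoulder, hand, upper_len, fore_len, hint=(0.0, 0.0, 1.0)):
    """Two-bone IK: place the elbow given shoulder and hand positions and the
    upper-arm and forearm lengths."""
    d = [h - s for s, h in zip(shoulder, hand)]
    dist = math.sqrt(sum(x * x for x in d)) or 1e-9
    dist = min(dist, upper_len + fore_len - 1e-6)   # arm cannot overstretch
    # distance from shoulder to the elbow's projection on the shoulder-hand axis
    a = (upper_len ** 2 - fore_len ** 2 + dist ** 2) / (2.0 * dist)
    r = math.sqrt(max(upper_len ** 2 - a ** 2, 0.0))  # offset off the axis
    axis = [x / dist for x in d]
    # component of the hint perpendicular to the axis -> bend direction
    dot = sum(h * x for h, x in zip(hint, axis))
    perp = [h - dot * x for h, x in zip(hint, axis)]
    pn = math.sqrt(sum(x * x for x in perp)) or 1e-9
    perp = [x / pn for x in perp]
    return tuple(s + a * x + r * p for s, x, p in zip(shoulder, axis, perp))
```

Calling this once per arm with (lse, lhp) and (rse, rhp) yields the two elbow positions.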
Step 6: map a preset lower-limb animation onto the lower body of the reconstructed model, run a validity check on the mapped result, and then map the positions of all joints in turn onto the upper and lower body of the reconstructed character, completing the reconstruction of this frame.
The preset lower-limb animations refer to a number of pre-prepared loopable animations, including but not limited to: forward, forward-left, forward-right, left, right, backward-left, backward-right, and backward. These loopable animations are then blended with different weights using standard animation-blending techniques according to the user's movement direction and speed.
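Directional blending of the eight clips can be sketched as below: project the user's horizontal velocity into the body frame, then weight the two nearest clips by angular proximity and fade them in with speed. The clip set comes from the patent; the weighting rule, the 2D (x, z) vector convention, and the `idle` clip are common-practice assumptions, not the patent's own formula.

```python
import math

def locomotion_blend_weights(vel_xz, bf_xz, speed_walk=1.0):
    """Blend weights over eight directional loop clips, given the horizontal
    velocity vel_xz = (x, z) and the body forward direction bf_xz = (x, z)."""
    clips = ["forward", "forward-left", "left", "backward-left",
             "backward", "backward-right", "right", "forward-right"]
    speed = math.hypot(vel_xz[0], vel_xz[1])
    if speed < 1e-6:
        return {"idle": 1.0}
    # heading of the velocity relative to the body's forward direction
    heading = math.degrees(math.atan2(vel_xz[0], vel_xz[1])
                           - math.atan2(bf_xz[0], bf_xz[1])) % 360.0
    sector = heading / 45.0
    i = int(sector) % 8
    frac = sector - int(sector)
    scale = min(speed / speed_walk, 1.0)   # fade in with speed
    weights = {clips[i]: (1.0 - frac) * scale,
               clips[(i + 1) % 8]: frac * scale}
    if scale < 1.0:
        weights["idle"] = 1.0 - scale
    return {k: v for k, v in weights.items() if v > 1e-9}
```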
The validity check inspects the mapped lower-limb positions for legality, i.e., whether the animation-blending result is valid or the lower limbs have penetrated into other objects. When the result is invalid, the lower limbs are computed with the same inverse-kinematics method as the upper limbs, avoiding errors such as penetrating the ground or the feet being stretched during a jump.
For the loopable animations, the swing direction of the two hands preferably decides whether the left or the right foot steps first: if, at the instant the user starts walking, the left arm swings forward and the right arm backward, the animation starts with the right foot; if the right arm swings forward and the left arm backward, or no valid arm-swing motion is detected, the animation starts with the left foot.
The present invention further relates to a system implementing the above method, comprising: a sensor data buffering and statistics module, a body-forward-direction estimation module, an inverse-kinematics solver module, an upper-body reconstruction module, and a lower-body reconstruction module. The sensor data buffering and statistics module buffers and aggregates the data from the sensors and outputs them to the body-forward-direction estimation module and the lower-body reconstruction module; the body-forward-direction estimation module estimates the body's forward direction from the statistics and outputs it to the upper-body reconstruction module; the upper-body reconstruction module derives the waist height, the absolute waist position, and the left and right shoulder positions from the body's forward direction and outputs them to the inverse-kinematics solver module for upper-body reconstruction; and the lower-body reconstruction module blends the preset loopable animations according to the statistics and outputs the blended result to the inverse-kinematics solver module for the validity check and the mapping of all joints.
Technical Effects
Compared with the prior art, the present invention, having considered the need of virtual reality systems to reconstruct the human body, realizes a reconstruction method that can rebuild the human body in real time from a very small number of sensors. The invention forgoes foot-tracking information to match the tracking configuration offered by today's popular virtual reality devices, chiefly because most virtual reality applications do not depend heavily on foot motion: natural-looking foot animation is sufficient for typical social VR and similar applications, accurate reconstruction of the upper body matters more than that of the lower body, and in virtual reality applications everyone's eyes are covered by head-mounted displays, so as long as the reconstruction scheme produces sufficiently natural human motion it can serve a large number of common virtual reality applications. The present invention therefore uses the relatively accurate inverse-kinematics scheme to restore the arm animation precisely, and the animation-blending technique to restore the leg animation quickly.
Brief Description of the Drawings
Fig. 1 is a flow chart of the simulation proposed by the present invention;
Fig. 2 is a schematic diagram of the average forward direction of the head in the present invention;
Fig. 3 is a schematic diagram of the direction of the perpendicular bisector of the line connecting the two hands in the present invention;
Fig. 4 illustrates the angle a_cross in the present invention;
Fig. 5 illustrates the angle a_linear in the present invention;
Fig. 6 shows the effects achieved by the embodiment;
In Fig. 6: a is walking forward, b is walking forward-right, c is walking right, d is walking backward-left, e is walking backward, f is squatting, g is walking in a half-squat state, and h is jumping.
Detailed Description of the Embodiments
This embodiment uses a robot model to demonstrate the reconstruction algorithm proposed in the present invention. In this scenario, the user's movements within the specified range are reconstructed in real time in the virtual reality system; in multi-user applications, the reconstructed result gives other users a stronger immersive experience.
The following steps of this embodiment were all carried out on an HTC Vive virtual reality device, which as standard comprises one head-mounted display and two hand controllers, captures the position and posture of the user's head and hands, and converts the sensor signals, via the underlying OpenVR interface, into head and hand positions and postures in the spatial coordinate system. This sensor configuration is currently standard for high-end head-mounted displays in the virtual reality field, and the scheme is also compatible with Oculus and PSVR.
As shown in Fig. 1, this embodiment comprises the following steps:
Step 1: create an empty scene, create a ground plane with a collider, import the robot model, obtain the robot model's height, waist height, and shoulder width and height, and write them, together with the other relevant information, into the configuration file of the present invention.
The other relevant information in Step 1 refers to the sensitivity thresholds for human motion, e.g., the head speed above which the system judges that the user has started walking.
Step 2: the user wears the virtual reality devices, i.e., the head-mounted display and the hand-held motion controllers, and assumes a T-pose to calibrate the robot model so that it fits users of various builds.
The virtual reality controllers are, without limitation, handle devices used for hand interaction and for obtaining the absolute position and posture of the hands, such as the Oculus Touch, the HTC Vive hand motion controller, and the PS Move. A head-mounted display is the display in a virtual reality device worn on the head to present stereoscopic images to both eyes, such as the Oculus Rift, HTC Vive, or PSVR; a head-mounted display can also track the absolute position and posture of the head.
Step 3: after calibration, the reconstruction algorithm starts. It records all incoming sensor data, buffers the sensor data of the last 50 frames, and computes in real time quantities useful for subsequent calculation, such as the head speed, head movement direction, hand speed, and hand movement direction over the user's previous 5 frames.
Step 4: estimate the body's forward direction, with the following specific sub-steps:
4.1) As shown in Figs. 3 and 4, compute the average forward direction of the head: hf_avg = (hf_prev * (n - 1) + hf_p) / n, where hf_prev is the head's average forward direction accumulated before this frame, hf_p is the projection of the head's forward direction onto the xz plane of the world coordinate system, and n is the number of frames over which the head's average forward direction is taken.
4.2) Compute the average direction of the perpendicular bisector of the line connecting the two hands: hb_avg = (hb_prev * (n - 1) + up x (lhp - rhp)_p) / n, where hb_prev is the forward direction of the perpendicular bisector of the hand line accumulated before this frame, lhp and rhp are the positions of the left and right hands in this frame, (lhp - rhp)_p is the projection of the line between the two hands onto the xz plane of the world coordinate system, n is the number of frames averaged, and up is the upward direction of the world coordinate system, i.e., (0, 0, 1).
4.3) As shown in Fig. 4, compute the angle a_cross, which is the angle between the hand-to-hand line and the body's forward direction, where hp is the head position in this frame, lhp and rhp are the positions of the left and right hands in this frame, and the subscript p denotes the projection of a vector onto the xz plane of the world coordinate system.
4.4) As shown in Fig. 5, compute the angle a_linear, which is the angle between the hand-to-hand line and the vertical plane of the head; the symbols are as above.
4.4.1) Using the velocity statistics from step three: if the head and both hands have been static over the last 5 frames, check whether a_cross lies between 85° and 95° or a_linear lies between −5° and 5°. If so, the body's forward direction in this frame is taken from the average head forward direction, bf_c = hf_avg_c.
4.4.2) If, over recent frames, the head is static but the hands are not: when the difference between hf_avg_c and bs_avg_c is less than 5°, then bf_c = hf_avg_c; otherwise, bf_c = bf_l.
4.4.3) If, over recent frames, the head is not static but both hands are static, then bf_c = bf_l.
4.4.4) If, over recent frames, neither the head nor the hands are static, then bf_c = bf_l.
In the steps above, bf_c is the body's forward direction in this frame, and bf_l is the body's forward direction in the previous frame.
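The branch structure of 4.4.1 to 4.4.4 can be written as one selection function. Since the per-branch formulas are not reproduced in the extracted text, which average each branch returns is an assumption; only the fallbacks bf_c = bf_l of 4.4.3 and 4.4.4 are stated explicitly.

```python
def select_body_forward(head_static, hands_static, head_avg, bisector_avg,
                        a_cross, a_linear, bf_last, avg_diff_deg):
    """Pick the body's forward direction bf_c for this frame (a sketch)."""
    if head_static and hands_static:                     # 4.4.1
        if 85.0 <= a_cross <= 95.0 or -5.0 <= a_linear <= 5.0:
            return head_avg     # canonical pose: trust the head average
        return bf_last
    if head_static:                                      # 4.4.2: hands moving
        # accept the head average only while it agrees with the hands'
        # bisector average to within 5 degrees (assumed rule)
        return head_avg if avg_diff_deg < 5.0 else bf_last
    return bf_last                                       # 4.4.3 / 4.4.4
```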
4.5) Finally, the body's actual forward direction bf_a is obtained by linear interpolation, so that the body's forward direction cannot change abruptly because of an unstable input signal: bf_a = bf_l*(1 − f) + bf_c*f, where f is a blending parameter; experimentally, the blended result looks most natural with f = 0.1.
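The smoothing step 4.5 is a plain per-frame linear blend, sketched here in Python with the stated f = 0.1:

```python
def smooth_forward(bf_last, bf_current, f=0.1):
    """bf_a = bf_l * (1 - f) + bf_c * f, applied componentwise."""
    return tuple((1.0 - f) * l + f * c for l, c in zip(bf_last, bf_current))
```

Because the old direction keeps a 0.9 weight each frame, a sudden change in bf_c is spread over roughly twenty frames (0.9^22 ≈ 0.1), which is what keeps noisy input from producing visible jumps.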
Step five: reconstruct the upper torso, i.e. the left shoulder position lse, the right shoulder position rse, and the waist position be.
5.1) Compute the waist height be_h from the position of the head, where h_h is the height of the head, h_m is the height of the character model mapped into the virtual scene, m_uh is the distance from the model's head to its waist, and h_mh is the minimum head height the model can squat to; the latter prevents the waist from dropping below the feet.
5.2) Once the waist height be_h is known, the waist's position in the horizontal plane must be determined to obtain the absolute waist position be. It can only be estimated from the head position, and the head position is never perfectly still: there is always a slight drift, so if the waist's horizontal position followed the head's horizontal position exactly, the character would appear to sway constantly. Interpolation is therefore used to damp these small head movements and stabilize the waist position.
5.3) With the waist position be known, lse and rse are computed as follows: me = be + (hp − be)/factor_s; lse = me − offset; rse = me + offset; where factor_s is an adjustment parameter for the shoulder height, bf is the body's forward direction computed in step four, and factor_w is an adjustment parameter for the shoulder width.
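The formulas above can be sketched as follows; the construction of `offset` as a horizontal perpendicular of bf scaled by factor_w is assumed, since the patent lists the symbols but the extracted text does not give that equation.

```python
def reconstruct_torso(be, hp, bf, factor_s, factor_w):
    """Chest point me, then left/right shoulders lse/rse (a sketch)."""
    me = tuple(b + (h - b) / factor_s for b, h in zip(be, hp))
    side = (bf[2], 0.0, -bf[0])               # bf rotated 90 deg in xz
    offset = tuple(factor_w * c for c in side)
    lse = tuple(m - o for m, o in zip(me, offset))
    rse = tuple(m + o for m, o in zip(me, offset))
    return lse, rse, me
```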
Step six: use inverse kinematics, with the left and right shoulder positions as roots and the positions of the left- and right-hand motion sensors as end nodes, to compute the elbow positions.
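The patent does not name a specific solver, but the two-bone case (shoulder to elbow to hand) has a standard closed form via the law of cosines. A hedged sketch, with `pole` as an assumed bend-direction hint not taken from the patent:

```python
import math

def solve_elbow(shoulder, hand, upper_len, fore_len, pole):
    """Place the elbow for a two-bone chain rooted at the shoulder with the
    hand sensor as end node (law-of-cosines IK, a sketch)."""
    to_hand = [h - s for s, h in zip(shoulder, hand)]
    d = math.sqrt(sum(c * c for c in to_hand))
    d = max(1e-6, min(d, upper_len + fore_len))   # the arm cannot overextend
    axis = [c / d for c in to_hand]
    # distance from the shoulder to the elbow, measured along the axis
    a = (upper_len ** 2 - fore_len ** 2 + d ** 2) / (2.0 * d)
    lift = math.sqrt(max(0.0, upper_len ** 2 - a ** 2))
    # bend toward the component of `pole` perpendicular to the axis
    dot = sum(p * c for p, c in zip(pole, axis))
    perp = [p - dot * c for p, c in zip(pole, axis)]
    norm = math.sqrt(sum(c * c for c in perp)) or 1.0
    return tuple(s + a * ax + lift * pe / norm
                 for s, ax, pe in zip(shoulder, axis, perp))
```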
Step seven: according to the direction and speed of the user's movement, blend the 8 pre-prepared animation clips with different weights using a commonly used animation blending technique, and map the blended result onto the lower body of the reconstructed model.
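One common way to derive per-clip weights for 8 directional clips is angular proximity with linear falloff between the two nearest directions. The patent does not state its weighting, so this is only an assumed scheme:

```python
def directional_weights(move_dir_deg,
                        clip_dirs=(0, 45, 90, 135, 180, 225, 270, 315)):
    """Normalized blend weights for 8 clips spaced 45 degrees apart."""
    raw = []
    for c in clip_dirs:
        diff = abs((move_dir_deg - c + 180.0) % 360.0 - 180.0)  # wrap-around
        raw.append(max(0.0, 1.0 - diff / 45.0))
    total = sum(raw) or 1.0
    return [w / total for w in raw]
```

A movement direction of 22.5° blends the forward and forward-right clips half and half; the weighted clips are then combined before being mapped onto the lower body.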
In the step above, for the forward and backward walking animations, the swing direction of the hands decides whether the left or the right foot steps first: if, at the instant the user starts walking, the left arm swings forward and the right arm swings backward, the right foot steps out first; otherwise, when a forward right-arm and backward left-arm swing is detected, or no valid arm swing is detected, the left foot steps out first.
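The leading-foot rule reduces to a sign test on the arms' forward swing at walk onset. A sketch with assumed input names, which the patent does not define:

```python
def leading_foot(left_arm_forward, right_arm_forward, threshold=0.0):
    """'right' when the left arm swings forward and the right arm swings
    back; otherwise (opposite swing, or no valid swing) 'left'."""
    if left_arm_forward > threshold and right_arm_forward < -threshold:
        return "right"
    return "left"
```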
Step eight: check the legality of the foot positions to determine whether the animation blending result is valid; if it is not, recompute it with the same inverse kinematics method used for the hands, which avoids errors such as the feet penetrating the ground or being stretched when the user jumps.
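Step eight's validity test can be read as two conditions, ground penetration and leg overextension. A minimal sketch; the names, thresholds, and y-up convention are mine:

```python
import math

def foot_is_legal(hip, foot, leg_len, ground_y=0.0, eps=1e-4):
    """True if the blended foot neither sinks below the ground nor sits
    farther from the hip than the leg length (which would stretch the leg
    during a jump); a failing foot would be re-solved with the same IK
    used for the hands."""
    if foot[1] < ground_y - eps:
        return False
    return math.dist(hip, foot) <= leg_len + eps
```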
Steps three through eight above are executed by the computer in a continuous loop to reconstruct the human body animation of every frame in real time.
The final effect of this embodiment is shown in Figure 6, where Figure 6a shows walking forward, Figure 6b forward-right, Figure 6c left, Figure 6d backward-left, Figure 6e walking backward, Figure 6f squatting, Figure 6g walking with bent knees, and Figure 6h jumping.
The present invention can be applied to virtual reality applications that need human body reconstruction but do not place heavy accuracy requirements on it, such as VR social applications and multi-user applications.
Those skilled in the art can locally adjust the above embodiment in different ways without departing from the principle and purpose of the present invention. The scope of protection of the present invention is defined by the claims and is not limited by the specific embodiment above; every implementation within that scope is bound by the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611150372.8A CN108230429A (en) | 2016-12-14 | 2016-12-14 | Real-time whole body posture reconstruction method based on head and two-hand positions and posture |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108230429A true CN108230429A (en) | 2018-06-29 |
Family
ID=62638297
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108874147A (en) * | 2018-07-12 | 2018-11-23 | 深圳沃利创意工程有限公司 | A kind of VR-BOX experience interactive device |
CN109683706A (en) * | 2018-12-10 | 2019-04-26 | 中车青岛四方机车车辆股份有限公司 | A kind of method and system of the more people's interactions of virtual reality |
CN110349180A (en) * | 2019-07-17 | 2019-10-18 | 深圳前海达闼云端智能科技有限公司 | Human body joint point prediction method and device and motion type identification method and device |
CN112076473A (en) * | 2020-09-11 | 2020-12-15 | 腾讯科技(深圳)有限公司 | Control method and device of virtual prop, electronic equipment and storage medium |
CN112527109A (en) * | 2020-12-04 | 2021-03-19 | 上海交通大学 | VR whole body action control method and system based on sitting posture and computer readable medium |
CN112614214A (en) * | 2020-12-18 | 2021-04-06 | 北京达佳互联信息技术有限公司 | Motion capture method, motion capture device, electronic device and storage medium |
CN112791381A (en) * | 2021-01-21 | 2021-05-14 | 深圳市瑞立视多媒体科技有限公司 | Method and device for moving waistband following player in virtual reality and computer equipment |
CN112842327A (en) * | 2021-01-05 | 2021-05-28 | 北京诺亦腾科技有限公司 | Body posture generation method and device, electronic equipment and medium |
CN115629670A (en) * | 2022-12-01 | 2023-01-20 | 北京格如灵科技有限公司 | Method, device, equipment and medium for displaying hand gesture in virtual reality environment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102317977A (en) * | 2009-02-17 | 2012-01-11 | 奥美可互动有限责任公司 | Method and system for gesture recognition |
CN103679712A (en) * | 2013-11-29 | 2014-03-26 | 马婷 | Human body posture estimation method and human body posture estimation system |
CN104117206A (en) * | 2014-08-01 | 2014-10-29 | 天津恒威先创科技发展有限公司 | Method for realizing virtual reality all-directional action based on action capturing system |
CN105868506A (en) * | 2016-04-25 | 2016-08-17 | 钱竞光 | Trampolining take-off action simulation and analysis method based on human body dynamics and net surface finite element coupling simulation |
Non-Patent Citations (1)
Title |
---|
FAN JIANG ET AL: ""Real-time Full-body Motion Reconstruction and Recognition for Off-the-Shelf VR"", 《THE 15TH ACM SIGGRAPH CONFERENCE》 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180629 |