
CN117549306A - Fall position identification method and accompanying robot - Google Patents

Fall position identification method and accompanying robot

Info

Publication number
CN117549306A
Authority
CN
China
Prior art keywords
time
axis
data
sequence
acceleration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311706801.5A
Other languages
Chinese (zh)
Inventor
王豪
徐雪
周炜翔
吴清明
但愿
刘亮亮
陈琪
孙淑伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Science and Technology WHUST
Original Assignee
Wuhan University of Science and Technology WHUST
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Science and Technology WHUST filed Critical Wuhan University of Science and Technology WHUST
Priority to CN202311706801.5A
Publication of CN117549306A
Legal status: Pending


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666Avoiding collision or forbidden zones

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a fall position identification method and a companion robot. A fall of a VR user is detected early from the body attitude angles and angular velocities acquired by the VR head-mounted device, and the trajectory of the user's head is then predicted with an LSTM encoder-decoder model. The model has high prediction accuracy and can be used for the path planning of the companion robot.

Description

A fall position identification method and companion robot

Technical field

The invention relates to the field of fall posture recognition, and in particular to a fall position identification method and a companion robot.

Background

VR users are prone to falls in everyday use for various reasons: the floor may be uneven or contain obstacles that the virtual environment does not show, which leads to a fall; poor quality or instability of the VR equipment may make the user lose balance and fall while moving or turning; and a VR experience may require the user to move or act quickly, so a user with poor balance or a limited perception of the real surroundings may fall.

If no one notices the fall and helps in time, the consequences can be severe: it was reported that a 44-year-old Moscow resident fell onto a glass table while using a VR headset and bled to death from the cuts. Existing monitoring systems and smart devices can recognize falls to some extent, but they cannot accurately determine the state of the person who fell, and they cannot protect the person or the VR equipment after the fall. It is therefore important to develop a system that can accurately recognize a fall and automatically move to the person to help.

Summary of the invention

The purpose of the invention is to provide a fall position identification method and a companion robot that can correctly and quickly recognize a VR user's fall and the landing position of the head, and send the companion robot to the landing position of the head to provide protection.

A fall position identification method comprises the following steps:

S1. Train an LSTM neural network.

Data collection: collect a time series of position coordinate data of the VR head-mounted device; the position coordinate data at each time step consists of the coordinates (xi, yi, zi) of the VR head-mounted device in the navigation coordinate system, where i is the index of the time step, x is the coordinate in the due east direction, y is the coordinate in the due north direction, and z is the coordinate perpendicular to the horizontal plane.

Data preparation: standardize the values of the position coordinate data on each of the three axes of the navigation coordinate system separately, so that after standardization the values on each axis have a mean of 0 and a variance of 1.

Dataset splitting: divide the standardized time series into an input sequence and a target sequence.

LSTM training: initialize the parameters of the LSTM model, forward-propagate the input sequence through the LSTM model to obtain a predicted sequence, compute the loss function from the predicted sequence and the target sequence, update the weight parameters of the LSTM model with the back-propagation algorithm using the computed gradient of the loss function, and repeat the training process until a preset number of iterations is reached or the model converges.

S2. Using the ground as a reference, establish a navigation coordinate system in the VR experience space and collect real-time data from the VR head-mounted device while it is worn, including the attitude angles, the accelerations and the position coordinate data in the navigation coordinate system.

S3. Feed the attitude angles and accelerations into the fall prediction model and extract the real-time data that triggers the fall condition.

S4. Using a sliding window, extract from the real-time data of step S3 the position coordinate data of the time steps inside the window to form a sample time series; the time steps inside the window include the time step of the real-time data extracted in S3.

S5. Standardize the sample time series of S4 and feed it into the trained LSTM neural network; apply inverse standardization to the output to obtain the predicted position coordinates of the VR head-mounted device (a minimal sketch of this inference step is given below).
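As a concrete illustration of steps S4–S5, the following Python sketch standardizes the positions of the window around the fall trigger, runs the trained LSTM, and inverts the standardization. The function name, the callable `model` interface and the argument layout are assumptions for illustration; the per-axis mean and variance are the statistics computed on the training data in step S1.

```python
import numpy as np

def predict_fall_position(model, window_positions, mean, var):
    """Illustrative sketch of steps S4-S5.

    window_positions: (window_len, 3) array of (x, y, z) coordinates of the
        time steps inside the sliding window around the fall trigger.
    mean, var: per-axis statistics computed on the training data in step S1.
    model: the trained LSTM predictor, assumed here to be a callable that maps
        a standardized window to the next standardized position.
    """
    # Standardization, as in the description: (original - mean) / variance.
    standardized = (window_positions - mean) / var
    # The trained LSTM predicts the next position in standardized coordinates.
    pred_standardized = model(standardized)
    # Inverse standardization: true value = standardized prediction * variance + mean.
    return np.asarray(pred_standardized) * var + mean
```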

The standardization process specifically comprises:

computing, for each of the three coordinate axes of the navigation coordinate system, the mean and the variance of the corresponding values of the position coordinate data over all time steps;

converting, with the following formula, the value of the position coordinate data of each time step on each coordinate axis into a standardized value:

standardized value = (original value - mean) / variance;

where the original value is the value of the position coordinate data of the given time step on the selected coordinate axis, the mean is the mean of the values of the position coordinate data of all time steps on that axis, and the variance is the variance of the values of the position coordinate data of all time steps on that axis.

Preferably, when splitting the dataset, a sliding window is used: the standardized time series inside one time window is taken as the input sequence, and the time series of the next time step after the input sequence is taken as the target sequence.

Further, when splitting the dataset, the sliding window divides the standardized time series inside one window into three consecutive time steps; the time series of the first two time steps forms the input sequence and the time series of the last time step forms the target sequence.

In step S2 the attitude angles are the pitch, roll and yaw angles: the yaw angle is the offset angle relative to the Y axis, the pitch angle is the offset angle relative to the X axis, and the roll angle is the deflection angle relative to the Z axis; the X axis is perpendicular to the left and right sides of the human body, the Z axis is perpendicular to the front and back of the human body, and the Y axis is perpendicular to both the X and Z axes. The accelerations are AccX, AccY and AccZ along the X, Y and Z axes.

The fall prediction model of step S3 comprises an attitude angle discrimination unit and an acceleration discrimination unit, and performs the following steps to decide whether the fall condition is triggered:

after the attitude angles are input to the attitude angle discrimination unit, formula (1) is used to distinguish forward bending and backward falling of the human body from other postures, giving sqrtRP;

from sqrtRP, RPP_delta is computed according to formula (2) and compared with the configured RPP_delta threshold, which effectively extracts the sideways and backward bending and falling movements of the human body:

RPP_delta=fabs(SqrtRP-yaw) (2)

when the RPP_delta angle is greater than 30°, the accelerations are input to the acceleration discrimination unit;

when the yaw angle drops to no more than 70°, the acceleration discrimination unit first computes the resultant acceleration of the Z axis and the Y axis, SqrtAccYZ, from the Z-axis and Y-axis accelerations according to formula (3), then computes ACCYZZ_delta from SqrtAccYZ and the Z-axis acceleration according to formula (4) and compares it with the normal threshold and the fall threshold to decide whether the human body has fallen:

AccYZZ_delta=fabs(SqrtAccYZ-AccZ) (4)

the change of the ACCYZZ_delta angle is observed; when ACCYZZ_delta is smaller than 40, the motion is judged to be a fall and the fall condition is triggered.

A companion robot travels to the position coordinates predicted in step S5 to provide protection; the companion robot carries a memory-foam pad that absorbs impact and energy and reduces the impact on the head during a fall.

The invention provides a human head trajectory prediction method based on an LSTM encoder-decoder model; the model has high prediction accuracy and can be used for the path planning of the companion robot. The invention also provides an early-warning method for falling behaviour that can quickly recognize a VR user's fall.

Description of the drawings

Figure 1 is a schematic diagram of the LSTM neural network;

Figure 2 is a schematic diagram of the attitude angles of the human body;

Figure 3 is a schematic diagram of the fall-judgment flow of the fall prediction model;

Figure 4 is a flow chart of the path planning of the companion robot.

Detailed description

To make the purpose, technical solution and advantages of the invention clearer, the invention is described in further detail below with reference to the drawings and the embodiments. It should be understood that the specific embodiments described here only explain the invention and do not limit it; the described embodiments are only some, not all, of the embodiments of the invention. The components of the embodiments of the invention generally described and shown in the drawings may be arranged and designed in a variety of different configurations.

Therefore, the following detailed description of the embodiments of the invention provided in the drawings is not intended to limit the scope of the claimed invention but merely represents selected embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without creative work fall within the scope of protection of the invention.

It should be noted that relational terms such as "first" and "second" are only used to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between them. Moreover, the terms "comprise", "include" or any of their variants are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprises a ..." does not exclude the presence of additional identical elements in the process, method, article or device that includes the element.

The features and performance of the invention are described in further detail below with reference to the embodiments.

Example 1

The present application proposes a fall position identification method, detailed as follows.

S1. Train an LSTM neural network.

Data collection: collect a time series of position coordinate data of the VR head-mounted device; the position coordinate data at each time step consists of the coordinates (xi, yi, zi) of the VR head-mounted device in the navigation coordinate system, where i is the index of the time step, x is the coordinate in the due east direction, y is the coordinate in the due north direction, and z is the coordinate perpendicular to the horizontal plane.

Data preparation: standardize the values of the position coordinate data on each of the three axes of the navigation coordinate system separately, so that after standardization the values on each axis have a mean of 0 and a variance of 1.

Dataset splitting: divide the standardized time series into an input sequence and a target sequence.

LSTM training: initialize the parameters of the LSTM model, forward-propagate the input sequence through the LSTM model to obtain a predicted sequence, compute the loss function from the predicted sequence and the target sequence, update the weight parameters of the LSTM model with the back-propagation algorithm using the computed gradient of the loss function, and repeat the training process until a preset number of iterations is reached or the model converges.

S2. Using the ground as a reference, establish a navigation coordinate system in the VR experience space and collect real-time data from the VR head-mounted device while it is worn, including the attitude angles, the accelerations and the position coordinate data in the navigation coordinate system.

S3. Feed the attitude angles and accelerations into the fall prediction model and extract the real-time data that triggers the fall condition.

S4. Using a sliding window, extract from the real-time data of step S3 the position coordinate data of the time steps inside the window to form a sample time series; the time steps inside the window include the time step of the real-time data extracted in S3.

S5. Standardize the sample time series of S4 and feed it into the trained LSTM neural network; apply inverse standardization to the output to obtain the predicted position coordinates of the VR head-mounted device.

Specifically, the position coordinate data of the VR head-mounted device is obtained with a radar sensor installed on the device. The antenna of the radar sensor emits electromagnetic waves that propagate in space and are reflected, refracted or scattered when they meet an object. The receiver of the radar sensor picks up the signal reflected by the target, and the received signal is amplified, filtered and demodulated so that the relevant information about the target can be extracted. By measuring the time difference between the transmitted and received signals, the radar sensor can compute the distance to the target.

Typically, the radar sensor uses a navigation coordinate system referenced to the local ground, and targets in the VR experience space have defined positions relative to that coordinate system. The position coordinates of the VR head-mounted device in the VR experience space, (xi, yi, zi), can therefore be obtained.
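The distance computation described above can be sketched as a simple time-of-flight calculation, assuming the radar measures the round-trip delay of the reflected pulse; the function name and the c·Δt/2 model are assumptions for illustration, not details taken from the patent.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def radar_distance(round_trip_time_s: float) -> float:
    """Distance to the reflecting target from the measured round-trip delay.

    The emitted wave travels to the target and back, so the one-way
    distance is half of the total path length.
    """
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: a 20 ns round trip corresponds to roughly 3 m.
print(radar_distance(20e-9))  # ~2.998 m
```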

Preferably, the standardization process specifically comprises:

computing, for each of the three coordinate axes of the navigation coordinate system, the mean and the variance of the corresponding values of the position coordinate data over all time steps;

converting, with the following formula, the value of the position coordinate data of each time step on each coordinate axis into a standardized value:

standardized value = (original value - mean) / variance;

where the original value is the value of the position coordinate data of the given time step on the selected coordinate axis, the mean is the mean of the values of the position coordinate data of all time steps on that axis, and the variance is the variance of the values of the position coordinate data of all time steps on that axis.

Through this standardization the standardized values have a mean of 0 and a variance of 1.

Inverse standardization is used to recover the true value from the standardized prediction result:

true value = standardized prediction result * variance + mean
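A minimal sketch of the per-axis standardization and its inverse, following the formulas above. Note that the text divides by the variance rather than the standard deviation; the sketch follows the text as written. The function names are illustrative assumptions.

```python
import numpy as np

def standardize(positions: np.ndarray):
    """positions: (N, 3) array of (x, y, z) over N time steps.

    Returns the standardized series plus the per-axis mean and variance,
    which are needed later for the inverse transform.
    """
    mean = positions.mean(axis=0)
    var = positions.var(axis=0)
    return (positions - mean) / var, mean, var

def destandardize(standardized: np.ndarray, mean: np.ndarray, var: np.ndarray):
    """Inverse transform: true value = standardized value * variance + mean."""
    return standardized * var + mean
```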

In step S1 of this embodiment, the input sequence is the standardized historical time series fed to the model, and the target sequence is the standardized time series of the next time step to be predicted.

Specifically, using the sliding window method, the standardized time series inside one time window is taken as the input sequence and the time series of the next time step after the input sequence is taken as the target sequence.

Further, when the sliding window method is used, the standardized time series inside one window is divided into three consecutive time steps; the time series of the first two time steps forms the input sequence and the time series of the last time step forms the target sequence.

Assume the standardized time series is as follows:

t1: 10, 20, 30, 40;

t2: 15, 25, 35, 45;

t3: 20, 30, 40, 50;

t4: 25, 35, 45, 55;

Using the sliding window method, these data are converted into input sequences and target sequences.

Assume the window size is 2, i.e. each input sample contains the data of two time steps:

Input: [10,20,30,40], [15,25,35,45]; output: [20,30,40,50]

Input: [15,25,35,45], [20,30,40,50]; output: [25,35,45,55]

The sliding window method thus divides the continuous position data of the VR head-mounted device into input windows and their corresponding outputs.
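The window construction above can be reproduced with a short helper; with window_size=2 on the t1–t4 series it yields exactly the two (input, output) pairs listed. The function name is an assumption for illustration.

```python
import numpy as np

def make_windows(series: np.ndarray, window_size: int = 2):
    """Split a (T, D) time series into (input window, next-step target) pairs.

    Each input contains `window_size` consecutive time steps and the target
    is the single time step that follows the window.
    """
    inputs, targets = [], []
    for start in range(len(series) - window_size):
        inputs.append(series[start:start + window_size])
        targets.append(series[start + window_size])
    return np.array(inputs), np.array(targets)

series = np.array([[10, 20, 30, 40],
                   [15, 25, 35, 45],
                   [20, 30, 40, 50],
                   [25, 35, 45, 55]], dtype=float)
X, y = make_windows(series, window_size=2)
# X[0] = [[10,20,30,40],[15,25,35,45]], y[0] = [20,30,40,50]
# X[1] = [[15,25,35,45],[20,30,40,50]], y[1] = [25,35,45,55]
```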

The LSTM neural network of step S1 is a special kind of RNN (recurrent neural network). As shown in Figure 1, the LSTM network takes the position coordinates of the VR head-mounted device at time t as input in chronological order; each LSTM unit computes the output of the current state from the state of its memory cell combined with the input, updates the memory cell and passes the information on to the next hidden-layer unit, and finally outputs the position coordinate information at time t+1.

When training the trajectory prediction model of the VR head-mounted device, a longer future trajectory has to be predicted from the preceding segment of trajectory data. This requires the LSTM network to be run in a loop, feeding the output at time t+1 back in as the input at time t+2, and so on; in this way the prediction of the long-term trajectory of the VR head-mounted device is completed. Experiments were carried out with position coordinates of the VR head-mounted device at several step sizes as the network input. During training the network evaluates and optimizes the predicted output against the known trajectory of the VR head-mounted device: the average three-dimensional distance between the predicted and actual values is taken as the training objective, the mean squared error between the predicted and actual values is chosen as the loss function, the back-propagation algorithm updates the weight parameters of the LSTM model from the computed gradient of the loss function, and the training process is repeated until the preset number of iterations is reached or the model converges.
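A minimal PyTorch sketch of such an LSTM predictor and of the recursive rollout described above (the output at t+1 is fed back in as the input at t+2). The architecture, layer sizes and training settings are illustrative assumptions; the patent does not specify them.

```python
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    """Maps a window of standardized (x, y, z) positions to the next position."""
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 3)

    def forward(self, x):                 # x: (batch, time, 3)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict the next time step

def train(model, X, y, epochs: int = 200, lr: float = 1e-3):
    """X: (N, window, 3) inputs, y: (N, 3) next-step targets; MSE loss as in the text."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()                   # back-propagation of the loss gradient
        opt.step()                        # weight update

def rollout(model, window, steps: int):
    """Recursive multi-step prediction: append each prediction to the window."""
    window = window.clone()               # (1, window_len, 3)
    preds = []
    with torch.no_grad():
        for _ in range(steps):
            nxt = model(window)           # (1, 3) prediction for the next step
            preds.append(nxt)
            window = torch.cat([window[:, 1:, :], nxt.unsqueeze(1)], dim=1)
    return torch.cat(preds, dim=0)        # (steps, 3)
```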

In step S2 of this embodiment, the sensor used by the VR head-mounted device to detect the human body posture and the gravitational acceleration is the InvenSense MPU6050 chip, which integrates a 3-axis gyroscope and a 3-axis accelerometer. To better observe the changes in acceleration, the accelerations in the three directions are analyzed separately and the X, Y and Z coordinate axes are established. To simplify the analysis of the attitude angles, the MPU6050 chip is used to collect the human body data; it has its own three-axis coordinate system and can measure the attitude angles, namely the pitch angle, the roll angle and the yaw angle. As shown in Figure 2, the yaw angle is the offset angle relative to the Y axis, the pitch angle is the offset angle relative to the X axis, and the roll angle is the deflection angle relative to the Z axis. The X axis is perpendicular to the left and right sides of the human body, the Z axis is perpendicular to the front and back of the human body, and the Y axis is perpendicular to both the X and Z axes. This attitude coordinate system is established by mounting the MPU6050 chip on the VR head-mounted device. The accelerations are AccX, AccY and AccZ along the X, Y and Z axes.

The fall prediction model of step S3 comprises an attitude angle discrimination unit and an acceleration discrimination unit; the fall prediction model performs the judgment flow shown in Figure 3.

After the attitude angles are input to the attitude angle discrimination unit, formula (1) is used to distinguish forward bending and backward falling of the human body from other postures, giving sqrtRP.

From sqrtRP, RPP_delta is computed according to formula (2) and compared with the configured RPP_delta threshold, which effectively extracts the sideways and backward bending and falling movements of the human body:

RPP_delta=fabs(SqrtRP-yaw) (2)

When the human body stands normally, the RPP_delta angle does not exceed 30°; when a bending or falling movement occurs, it exceeds 30°. The RPP_delta threshold is therefore set to 30°: when RPP_delta is smaller than 30° the body is considered to be in normal activity, and when RPP_delta is larger than 30° the acceleration data is input to the acceleration discrimination unit for the next stage of judgment based on the gravitational acceleration.

Because bending and falling produce the same change in attitude angle, normal bending must be separated from falling by the intensity of the movement, and the intensity of the movement is judged from the gravitational acceleration.

When the body falls or leans backwards, the yaw angle decreases. When the yaw angle drops to the 70° threshold, the acceleration discrimination unit first computes the resultant acceleration of the Z axis and the Y axis, SqrtAccYZ, from the Z-axis and Y-axis accelerations according to formula (3). ACCYZZ_delta is then computed from SqrtAccYZ and the Z-axis acceleration according to formula (4) and compared with the normal threshold and the fall threshold to decide whether the human body has fallen.

AccY and AccZ are the accelerations in the Y and Z directions respectively. The Z-axis acceleration reflects whether there is violent backward motion at that moment, whether the body is in free fall, or whether it strikes the ground or a seat violently; at the same time, during a backward fall the downward acceleration along the Y axis also changes substantially.

AccYZZ_delta=fabs(SqrtAccYZ-AccZ) (4)

Observing the change of the ACCYZZ_delta angle: when leaning or lying backwards its magnitude is about 80, whereas during a fall it is about 40. The normal threshold of ACCYZZ_delta is therefore set to 80 (values above 80 are judged as leaning or lying backwards) and the fall threshold is set to 40 (values below 40 are judged as a fall), which further separates normal movements such as leaning back from falls.
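The threshold logic above can be summarized in a short sketch. Formulas (1) and (3) are not reproduced in the text; based on the variable names, the sketch assumes SqrtRP = sqrt(roll² + pitch²) and SqrtAccYZ = sqrt(AccY² + AccZ²), which should be treated as assumptions rather than the patent's exact definitions. The thresholds (30°, 70°, 80, 40) follow the description.

```python
import math

RPP_DELTA_THRESHOLD = 30.0   # degrees; below this is normal activity
YAW_THRESHOLD = 70.0         # degrees; acceleration check once yaw drops to this
NORMAL_THRESHOLD = 80.0      # ACCYZZ_delta ~80 when leaning or lying back
FALL_THRESHOLD = 40.0        # ACCYZZ_delta below 40 is judged as a fall

def is_fall(pitch, roll, yaw, acc_y, acc_z) -> bool:
    """Two-stage check: attitude-angle discrimination, then acceleration discrimination."""
    # Formula (1) is assumed to combine roll and pitch into a single magnitude.
    sqrt_rp = math.sqrt(roll ** 2 + pitch ** 2)
    # Formula (2): RPP_delta = |SqrtRP - yaw|
    rpp_delta = abs(sqrt_rp - yaw)
    if rpp_delta <= RPP_DELTA_THRESHOLD:
        return False                     # normal activity
    if yaw > YAW_THRESHOLD:
        return False                     # yaw has not dropped far enough yet
    # Formula (3) is assumed to be the resultant of the Y- and Z-axis accelerations.
    sqrt_acc_yz = math.sqrt(acc_y ** 2 + acc_z ** 2)
    # Formula (4): AccYZZ_delta = |SqrtAccYZ - AccZ|
    acc_yzz_delta = abs(sqrt_acc_yz - acc_z)
    return acc_yzz_delta < FALL_THRESHOLD
```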

Example 2

After the fall position of the human body is predicted, in this embodiment the companion robot is sent to the position coordinates predicted in step S5 to provide protection.

The companion robot basically consists of the following components: a chassis, wheels, a battery, DC motors, a DC motor driver module, a Bluetooth communication module, a core controller module, sensors and a memory-foam pad. Memory foam has excellent energy absorption: it effectively absorbs impact and energy and reduces the impact force of a fall; it provides good shock absorption, gradually adapts to the shape of the head and slows the propagation of the impact; and it provides individualized support according to the shape and weight of the head, and therefore supports the head well during a fall.

The companion robot uses a path planning algorithm (the A* algorithm) to compute a path from its current position to the target position.

The A* algorithm searches for a path with the help of a heuristic function and is suited to path planning when the global environment information is known. During its execution two lists are maintained, the Openlist and the Closelist: the Openlist holds the nodes that have been discovered but not yet selected for expansion, and the Closelist holds the nodes that have already been expanded.

The A* algorithm plans the path with a cost estimation function, expanding from the initial position to the nearby child nodes; the cost estimation function selects the node with the smallest cost value as the next parent node according to the distance costs of the child nodes, and this process is repeated until the path planning is finished and the final path is generated.

The general expression of the heuristic function is:

f(n) = g(n) + h(n)

where f(n) is the estimated cost of reaching the target point from the start grid point via node n, g(n) is the evaluated actual distance from the start point to grid n, and h(n) is the evaluated distance of the best route from grid n to the end point.

The path planning flow is shown in Figure 4 (a compact code sketch follows the listed steps):

(1) Set up the Openlist and the Closelist; the Openlist stores the nodes that have been discovered and are waiting to be expanded, and the Closelist stores the nodes that have already been expanded.

(2) Set Nx as the initial grid point and Ny as the specified target point, and place the initial grid point in the Openlist. (3) Visit the nodes near the initial grid point, place the non-obstacle points in the Openlist, compute their estimated costs with the heuristic function, and place the initial grid point in the Closelist.

(4) According to the optimal selection strategy, place the next expansion point N in the Closelist. If N is the specified target grid, the path planning ends and the algorithm has run successfully; otherwise continue with the following steps.

(5) Continue to visit the nodes around grid N; add the surrounding child nodes that are not in the Closelist to the Openlist and compute their costs.

(6) Return to step (4) until the expansion reaches the target point.

(7) The algorithm has run successfully; save the map and the expanded nodes, and the backtracked connection is the final path.
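A compact A* sketch on a 4-connected occupancy grid, following the open-list/closed-list flow and the cost f(n) = g(n) + h(n) described above. The grid representation, the Manhattan heuristic and the function names are illustrative assumptions; the patent does not fix them.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col).
    Returns the path from start to goal as a list of cells, or None."""
    def h(n):                        # heuristic: Manhattan distance to the goal
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

    tie = count()                    # tie-breaker so the heap never compares nodes
    open_list = [(h(start), next(tie), start)]   # entries are (f, tie, node)
    g_cost = {start: 0}              # best known g(n) for each discovered node
    came_from = {start: None}        # parent pointers for backtracking
    closed = set()                   # expanded nodes (the "Closelist")
    while open_list:
        _, _, node = heapq.heappop(open_list)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:             # reached the target cell: backtrack the path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if not (0 <= nr < len(grid) and 0 <= nc < len(grid[0])):
                continue
            if grid[nr][nc] == 1 or nb in closed:
                continue
            ng = g_cost[node] + 1    # uniform step cost between adjacent cells
            if ng < g_cost.get(nb, float("inf")):
                g_cost[nb] = ng
                came_from[nb] = node
                heapq.heappush(open_list, (ng + h(nb), next(tie), nb))  # f = g + h
    return None                      # no path exists
```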

The position of the companion robot in the experience space is obtained by installing a radar in a corner of the room and establishing the navigation coordinate system, which yields the position coordinates of the companion robot in the navigation space.

Claims (7)

1. A fall position identification method, characterized by comprising the steps of:
s1, training an LSTM neural network;
and (3) data collection: collecting a time sequence of position coordinate data of the vr head-mounted device, wherein the position coordinate data of each time step in the time sequence comprises coordinates (xi, yi, zi) of the vr head-mounted device in a navigation coordinate system, i is a number of the time step, x is a coordinate in the due east direction, y is a coordinate in the due north direction, and z is a coordinate perpendicular to the horizontal plane;
data preparation: respectively carrying out standardization processing on the numerical values corresponding to the position coordinate data of each time step in the time sequence in the three coordinate axes of the navigation coordinate system, so that the mean value of the numerical values corresponding to the position coordinate data of each time step in the time sequence after the standardization processing in the three coordinate axes of the navigation coordinate system is 0, and the variance is 1;
dividing the data set: dividing the time sequence after the normalization processing into an input sequence and a target sequence;
LSTM neural network training: initializing LSTM model parameters, performing forward propagation on an input sequence through the LSTM model to obtain a predicted sequence, calculating a loss function according to the predicted sequence and a target sequence, updating weight parameters of the LSTM model according to the calculated loss function gradient through a back propagation algorithm, and repeating the training process until the preset training times are reached or until convergence is achieved;
s2, taking the ground as a reference, establishing a navigation coordinate system in a VR experience space, and collecting real-time data of the VR head-mounted device when a human body wears the VR head-mounted device, wherein the real-time data comprise attitude angles, acceleration and position coordinate data in the navigation coordinate system;
s3, inputting the attitude angle and the acceleration into a falling pre-judging model, and extracting real-time data triggering falling conditions;
s4, extracting position coordinate data corresponding to time steps in the sliding window range from the real-time data extracted in the step S3 by adopting a sliding window method to form a sample time sequence, wherein the time steps in the sliding window range comprise the time steps of the real-time data extracted in the step S3;
s5, carrying out standardization processing on the S4 sample time sequence, inputting it into the trained LSTM neural network, and carrying out inverse standardization processing on the output result to obtain a prediction result of the position coordinate data of the VR head-mounted device.
2. The fall position identification method according to claim 1, wherein the step of normalizing comprises:
respectively solving the mean value and the variance of the numerical values corresponding to the position coordinate data of all the time steps in three coordinate axes of a navigation coordinate system;
converting the numerical value corresponding to the position coordinate data of each time step in the time sequence in three coordinate axes of the navigation coordinate system into a standardized numerical value by using the following formula:
normalized value = (original value-mean)/variance;
the original value is a numerical value corresponding to one coordinate axis selected by the position coordinate data of each time step in the navigation coordinate system, the average value is a mean value of numerical values corresponding to the selected one coordinate axis of the position coordinate data of all time steps in the navigation coordinate system, and the variance is a variance of numerical values corresponding to the selected one coordinate axis of the position coordinate data of all time steps in the navigation coordinate system.
3. A fall position identification method as claimed in claim 1, characterized in that, when dividing the data set, a sliding window method is used, in which the time sequence after normalization within one time window is taken as the input sequence, and the time sequence of the next time step of the input sequence is taken as the target sequence.
4. A fall position identification method as claimed in claim 1, characterized in that, when dividing the data set, a sliding window method is used to divide the time sequence after normalization processing within one time window into time sequences of three successive time steps, the time sequence of the first two time steps being the input sequence and the time sequence of the last time step being the target sequence.
5. The fall position recognition method according to claim 1, wherein the attitude angle in step S2 includes a Pitch angle, a Roll angle and a Yaw angle, the yaw angle representing the offset angle from the Y axis, the pitch angle representing the offset angle from the X axis, and the roll angle representing the deflection angle from the Z axis; the X axis is the axis perpendicular to the left and right sides of the human body, the Z axis is the axis perpendicular to the front and back sides of the human body, the Y axis is perpendicular to the X and Z axes, and the acceleration is the acceleration AccX, AccY, AccZ along the X, Y and Z axes.
6. The fall position recognition method according to claim 1, wherein the fall pre-determination model in step S3 includes an attitude angle distinguishing unit and an acceleration distinguishing unit, and the fall pre-determination model performs the steps of determining whether a fall condition is triggered:
after the attitude angle is input into the attitude angle distinguishing unit, the forward bending and backward falling of the human body can be distinguished from other attitudes by utilizing the formula (1) to obtain the sqrtRP;
RPP_delta is calculated from the sqrtRP according to formula (2) and compared with a set RPP_delta threshold value, so that the sideways and backward bending and falling actions of the human body can be effectively extracted;
RPP_delta=fabs(SqrtRP-yaw) (2)
Inputting the acceleration into the acceleration distinguishing unit when the RPP_delta angle is larger than 30 degrees;
when the yaw angle drops to not more than 70°, the acceleration distinguishing unit first calculates the resultant acceleration of the Z axis and the Y axis, SqrtAccYZ, according to formula (3) based on the acceleration in the Z axis direction and the acceleration in the Y axis direction, then calculates ACCYZZ_delta according to SqrtAccYZ, the acceleration in the Z axis direction and formula (4), and compares ACCYZZ_delta with a normal threshold value and a fall threshold value to judge whether a falling action of the human body occurs;
AccYZZ_delta=fabs(SqrtAccYZ-AccZ) (4)
observing the change of the ACCYZZ_delta angle, and judging a fall when ACCYZZ_delta is smaller than 40, thereby triggering the fall condition.
7. A companion robot, characterized in that the companion robot goes to the position coordinates predicted in step S5 of claim 1 to provide protection, and the companion robot carries a memory foam cushion for absorbing impact and energy and reducing the impact force on the head during a fall.
CN202311706801.5A 2023-12-13 2023-12-13 Fall position identification method and accompanying robot Pending CN117549306A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311706801.5A CN117549306A (en) 2023-12-13 2023-12-13 Fall position identification method and accompanying robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311706801.5A CN117549306A (en) 2023-12-13 2023-12-13 Fall position identification method and accompanying robot

Publications (1)

Publication Number Publication Date
CN117549306A true CN117549306A (en) 2024-02-13

Family

ID=89823221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311706801.5A Pending CN117549306A (en) 2023-12-13 2023-12-13 Fall position identification method and accompanying robot

Country Status (1)

Country Link
CN (1) CN117549306A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119526430A (en) * 2025-01-22 2025-02-28 北京信通泰克科技有限公司 Robot-based operation fall monitoring and early warning method and system
CN119526430B (en) * 2025-01-22 2025-04-29 北京信通泰克科技有限公司 Robot-based operation falling monitoring and early warning method and system

Similar Documents

Publication Publication Date Title
CN104061934B (en) Pedestrian indoor position tracking method based on inertial sensor
EP3680618A1 (en) Method and system for tracking a mobile device
CN105910601B (en) A kind of indoor ground magnetic positioning method based on Hidden Markov Model
CN104090262B (en) A kind of method for tracking moving target merging estimation based on multi-sampling rate multi-model
CN107014375B (en) Indoor positioning system and method with ultra-low deployment
CN107958221A (en) A kind of human motion Approach for Gait Classification based on convolutional neural networks
CN106908762B (en) A Multi-hypothesis UKF Target Tracking Method for UHF-RFID Systems
CN102288176A (en) Coal mine disaster relief robot navigation system based on information integration and method
CN112967392A (en) Large-scale park mapping and positioning method based on multi-sensor contact
CN106468951A (en) A kind of intelligent remote control systems based on the fusion of both hands ring sensor and its method
JP2013531781A (en) Method and system for detecting zero speed state of object
CN104864873B (en) A method of using human motion features to assist map positioning
CN111982102B (en) BP-EKF-based UWB-IMU positioning method in complex environment
CN106643739A (en) Indoor environment personnel location method and system
CN110487273B (en) An indoor pedestrian trajectory estimation method assisted by a spirit level
CN107702712A (en) Indoor pedestrian's combined positioning method based on inertia measurement bilayer WLAN fingerprint bases
CN117549306A (en) Fall position identification method and accompanying robot
CN106406311A (en) Robot walking obstacle avoidance method based on information fusion and environmental perception
CN108362289A (en) A kind of mobile intelligent terminal PDR localization methods based on Multi-sensor Fusion
JP2013234919A (en) Positioning system, positioning method, and program
CN117260757A (en) Robot inspection system based on inspection data
KR102095135B1 (en) Method of positioning indoor and apparatuses performing the same
Saadatzadeh et al. An improvement in smartphone-based 3D indoor positioning using an effective map matching method
CN105357753B (en) A kind of indoor orientation method based on multimodality fusion recursive iteration
CN113029173A (en) Vehicle navigation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination