
CN114092971B - A human motion assessment method based on visual images - Google Patents

A human motion assessment method based on visual images

Info

Publication number
CN114092971B
CN114092971B
Authority
CN
China
Prior art keywords
rotation angle
counterclockwise rotation
key point
angle
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111423509.3A
Other languages
Chinese (zh)
Other versions
CN114092971A (en)
Inventor
仲元红
钟代笛
徐乾锋
冉琳
王新月
郭雨薇
魏晓燕
赵艳霞
黄智勇
周庆
葛亮
唐枋
刘继武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN202111423509.3A priority Critical patent/CN114092971B/en
Publication of CN114092971A publication Critical patent/CN114092971A/en
Application granted granted Critical
Publication of CN114092971B publication Critical patent/CN114092971B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Image Analysis (AREA)

Abstract


The present invention relates to the technical field of computer vision image processing, and in particular to a human motion assessment method based on visual images, comprising: obtaining a video of a tester to be tested; performing skeleton analysis and posture analysis on a video frame of the video to be tested, and generating a corresponding human key point coordinate diagram; calculating corresponding motion assessment auxiliary information based on the human key point coordinate diagram in combination with corresponding motion assessment standards; completing motion assessment based on the motion assessment auxiliary information in combination with corresponding motion assessment decision basis to generate a corresponding motion assessment result. The human motion assessment method in the present invention can be applied to a variety of motion assessments, thereby improving the efficiency of human motion assessment.

Description

Human body action evaluation method based on visual image
Technical Field
The invention relates to the technical field of computer vision image processing, in particular to a human body action evaluation method based on a vision image.
Background
With the wide application of Internet big-data technology, human behavior detection and recognition based on visual images is finding more and more application scenarios. By analyzing motion characteristics such as human expression and posture, it can provide rich identification feature information for applications such as behavior detection and prediction of people in public places or specific activity spaces, and is an important component of big-data information on human activity.
For example, in fields such as athletic sports and health screening, human actions must be recognized in order to evaluate how standard they are. Action evaluation needs to be performed based on action evaluation criteria and an action evaluation decision basis. In the common early approach, an evaluator watches the tester's action, compares it with a standard action pattern by eye, and assigns a score, while video is also recorded with a handheld camera and stored as a backup. This not only wastes manpower and material resources, but the subjectivity of human judgment also makes the evaluation results insufficiently objective and accurate.
With the development of computer technology, methods for human motion assessment based on visual images have appeared in the prior art. For example, Chinese patent publication No. CN110941990A discloses a method and device for evaluating human actions based on skeleton key points: action pictures of a target subject are collected during human movement, skeleton key point coordinates of the subject's actions are extracted from the pictures, and the coordinates are input into a pre-trained evaluation model, which evaluates the actions based on human posture azimuth angles calculated from the skeleton key point coordinates.
The prior-art method calculates human posture azimuth angles from skeleton key points and thereby evaluates human actions. However, the applicant found that the posture azimuth angle, as a single kind of action evaluation auxiliary information, is generally suitable only for evaluating one corresponding action and is difficult to apply to many different actions, because evaluating some actions also requires the distance or positional relationship between key points, or even the similarity between video frames. Which auxiliary information must be calculated to complete an evaluation depends on the corresponding action evaluation criteria and decision basis, and the prior art offers no general evaluation method applicable to multiple action evaluations, so a dedicated evaluation method has to be designed for each action, making action evaluation inefficient. How to design a general action evaluation method suitable for multiple action evaluations is therefore a technical problem to be solved.
Disclosure of Invention
Aiming at the deficiencies of the prior art, the invention provides a human motion assessment method that is applicable to multiple action evaluations, thereby improving the efficiency of human motion assessment.
In order to solve the technical problems, the invention adopts the following technical scheme:
A human motion assessment method based on visual images, comprising the steps of:
S1, acquiring a video to be tested of a tester;
S2, performing skeleton analysis and posture analysis on video frames of the video to be tested to generate a corresponding human body key point coordinate graph;
S3, calculating corresponding action evaluation auxiliary information based on the human body key point coordinate graph in combination with the corresponding action evaluation criteria;
S4, completing the action evaluation based on the action evaluation auxiliary information and the corresponding action evaluation decision basis, so as to generate a corresponding action evaluation result.
Preferably, in step S3, the action evaluation auxiliary information includes counterclockwise rotation angles between key points;
the counterclockwise rotation angle is calculated through the following steps:
S301, acquiring the key point coordinates A, B, C used for calculating the counterclockwise rotation angle;
S302, calculating the corresponding key point vectors $\overrightarrow{BA}$ and $\overrightarrow{BC}$ based on the key point coordinates A, B, C;
S303, rotating the key point vector $\overrightarrow{BA}$ counterclockwise until it coincides with the key point vector $\overrightarrow{BC}$; the angle through which $\overrightarrow{BA}$ is rotated to reach $\overrightarrow{BC}$ is taken as the corresponding counterclockwise rotation angle.
Preferably, in step S3, the action evaluation auxiliary information includes the similarity between a counterclockwise rotation angle to be tested in the video to be tested and the corresponding template counterclockwise rotation angle in the template video, wherein the similarity is calculated based on a dynamic time warping algorithm.
Preferably, the similarity between the counterclockwise rotation angle to be tested and the corresponding template counterclockwise rotation angle is calculated through the following steps:
S311, acquiring the counterclockwise rotation angle sequence to be tested $P=(p_1,p_2,\ldots,p_n)$ and the corresponding template counterclockwise rotation angle sequence $Q=(q_1,q_2,\ldots,q_m)$; $p_i$ denotes the counterclockwise rotation angle to be tested corresponding to the i-th video frame in the video to be tested, and $q_i$ denotes the template counterclockwise rotation angle corresponding to the i-th video frame in the template video;
S312, constructing an $n\times m$ two-dimensional matrix C based on the two sequences, where C(i, j) denotes the Euclidean distance between the i-th counterclockwise rotation angle to be tested and the j-th template counterclockwise rotation angle;
S313, in the two-dimensional matrix C, calculating the cumulative distances from the start position C(0, 0) to the end position C(n, m) and recording the corresponding matching paths; then selecting the matching path corresponding to the minimum cumulative distance D as the optimal matching path, and calculating its path step number K;
S314, calculating the corresponding similarity score based on the minimum cumulative distance D and the path step number K.
Preferably, the cumulative distance is calculated by the following formula:
$d(i,j)=c(i,j)+\min\{d(i-1,j-1),\,d(i-1,j),\,d(i,j-1)\};$
the optimal matching path is selected by the following formula:
$\mathrm{DTW}(P,Q)=\min\sum_{k=1}^{K}c_k;$
the similarity score is calculated by the following formula:
$s=e^{-h\cdot D/K};$
in the above expressions, d(i, j) denotes the cumulative distance from the start position C(0, 0) to the position C(i, j); $c_k$ denotes the k-th element of the two-dimensional matrix C along a matching path; s denotes the similarity score; and h denotes the adjustment coefficient, which is set to 0.2.
Preferably, the types of the counterclockwise rotation angle include the left forearm-left upper arm angle, the left upper arm-left shoulder angle, the left upper arm-torso angle, the torso-left thigh angle, the left thigh-left calf angle, the right upper arm-right forearm angle, the right shoulder-right upper arm angle, the torso-right upper arm angle, the torso-right thigh angle, and the right thigh-right calf angle; when calculating the similarity between a counterclockwise rotation angle to be tested and a template counterclockwise rotation angle, only the similarity for a single type of counterclockwise rotation angle can be calculated at a time.
Preferably, in step S3, when calculating the action evaluation auxiliary information, recommended key points are selected to participate in the calculation through the following steps:
S321, calculating the variance of each counterclockwise rotation angle in the human body key point coordinate graph;
S322, calculating the motion information proportion of each counterclockwise rotation angle based on its variance;
S323, selecting the key points corresponding to the counterclockwise rotation angle with the largest motion information proportion as the recommended key points.
Preferably, the variance of the counterclockwise rotation angle is calculated by the following formula:
$\sigma^2=\frac{1}{N}\sum_{r=1}^{N}(R_r-u_r)^2;$
the motion information proportion of the counterclockwise rotation angle is calculated by the following formula:
$I_n=\frac{e^{\sigma_n^2}}{\sum_{i=1}^{N}e^{\sigma_i^2}};$
in the above formulas, $\sigma^2$ denotes the variance of a counterclockwise rotation angle; R denotes the counterclockwise rotation angle; $u_r$ denotes the mean of the counterclockwise rotation angles in the human body key point coordinate graph; N denotes the number of counterclockwise rotation angles in the human body key point coordinate graph; $I_n$ denotes the motion information proportion of the n-th counterclockwise rotation angle in the human body key point coordinate graph; $\sigma_n^2$ denotes the variance of the n-th counterclockwise rotation angle in the human body key point coordinate graph; and e denotes the natural constant.
Preferably, in step S3, the action evaluation auxiliary information includes the Euclidean distance between key points, which is calculated by the formula
$d(A,B)=\sqrt{\sum_{i=1}^{n}(x_i-y_i)^2};$
in the above formula, d(A, B) denotes the Euclidean distance between the key points $A(x_1,x_2,\ldots,x_n)$ and $B(y_1,y_2,\ldots,y_n)$.
Preferably, in step S3, the action evaluation auxiliary information includes the positional relationship between key points, and the positional relationship between key points includes slope and difference.
Compared with the prior art, the human body action evaluation method has the following beneficial effects:
1. A human body action evaluation method applicable to the evaluation of multiple actions is provided, working from the action evaluation criteria and the action evaluation decision basis of the corresponding action, so that a dedicated evaluation method does not need to be designed for each action and the efficiency of human body action evaluation is improved.
2. The method generates the human body key point coordinate graph through skeleton analysis and posture analysis, calculates the action evaluation auxiliary information in combination with the corresponding action evaluation criteria, and completes the action evaluation in combination with the action evaluation decision basis, so that both the calculation of the auxiliary information and the evaluation itself are tied to the corresponding criteria and decision basis, which ensures the accuracy of the auxiliary information as well as the accuracy and effect of the evaluation.
Drawings
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings, in which:
FIG. 1 is a logic block diagram of a human motion assessment method;
FIG. 2 is a schematic view of the ten counterclockwise rotation angles on a human body;
FIG. 3 is a schematic illustration of the limb angle between the right upper arm and the right forearm;
FIG. 4 is a schematic diagram of the counterclockwise rotation angle between the right upper arm and the right forearm.
Detailed Description
The following is a further detailed description of the embodiments:
Examples:
First, the meaning of the action evaluation criteria and the action evaluation decision basis will be described.
The action evaluation criteria are the items to be evaluated during action evaluation.
Taking the deep squat as an example, the action evaluation criteria include: 1) whether the test stick is directly above the top of the head; 2) whether the trunk is parallel to the lower leg or perpendicular to the ground; 3) whether the thighs drop below horizontal when squatting; and 4) whether the knees stay in line with the direction of the feet.
The action evaluation decision basis refers to the scoring criteria used in action evaluation.
Taking the deep squat as an example, the action evaluation decision basis comprises: 1) the test stick is directly above the top of the head, the trunk is parallel to the lower leg or perpendicular to the ground, the thighs drop below horizontal when squatting, and the knees stay in line with the direction of the feet, scoring 3 points; 2) the required action cannot be completed as such but can be completed with the heels raised on a board, scoring 2 points; 3) the required action still cannot be completed even with the heels raised on a board, scoring 1 point; and 4) pain occurs in any part of the body during the test, scoring 0 points.
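Because the decision basis is a simple rule table, it maps directly onto code. The following is a minimal sketch in Python; the function name and its boolean inputs are illustrative assumptions, not part of the patent text:

```python
def score_deep_squat(all_criteria_met: bool,
                     met_with_heel_board: bool,
                     pain_reported: bool) -> int:
    """Map the four deep-squat decision rules to a 0-3 score."""
    if pain_reported:            # rule 4: pain anywhere during the test
        return 0
    if all_criteria_met:         # rule 1: all four criteria satisfied as-is
        return 3
    if met_with_heel_board:      # rule 2: completed with heels raised on a board
        return 2
    return 1                     # rule 3: not completed even with the board
```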
Based on the above description, a human motion evaluation method based on visual images is disclosed in the present embodiment.
As shown in fig. 1, the human motion estimation method based on the visual image includes the following steps:
S1, acquiring a video to be tested of a tester;
S2, performing skeleton analysis and posture analysis on video frames of the video to be tested to generate a corresponding human body key point coordinate graph. In this embodiment, the skeleton analysis and posture analysis of the video frames are performed through the AlphaPose model proposed by Shanghai Jiao Tong University.
S3, calculating corresponding action evaluation auxiliary information based on the human body key point coordinate graph in combination with the corresponding action evaluation criteria;
S4, completing the action evaluation based on the action evaluation auxiliary information and the corresponding action evaluation decision basis, so as to generate a corresponding action evaluation result.
With this human body action evaluation method, the evaluation of various actions can be completed according to the action evaluation criteria and the action evaluation decision basis of the corresponding action; in other words, a single evaluation method applicable to various actions is provided, so that a dedicated method does not need to be designed for each action and the efficiency of human body action evaluation is improved. Meanwhile, the human body key point coordinate graph is generated through skeleton analysis and posture analysis, the action evaluation auxiliary information is then calculated in combination with the corresponding action evaluation criteria, and the evaluation is completed in combination with the action evaluation decision basis, so that both the calculation of the auxiliary information and the evaluation itself are tied to the corresponding criteria and decision basis, which ensures the accuracy of the auxiliary information as well as the accuracy and effect of the evaluation.
In a specific implementation, the action evaluation auxiliary information includes counterclockwise rotation angles between key points. As shown in FIG. 2, the types of the counterclockwise rotation angle include the left forearm-left upper arm angle, the left upper arm-left shoulder angle, the left upper arm-torso angle, the torso-left thigh angle, the left thigh-left calf angle, the right upper arm-right forearm angle, the right shoulder-right upper arm angle, the torso-right upper arm angle, the torso-right thigh angle, and the right thigh-right calf angle.
The counterclockwise rotation angle is calculated through the following steps:
S301, acquiring the key point coordinates A, B, C used for calculating the counterclockwise rotation angle;
S302, calculating the corresponding key point vectors $\overrightarrow{BA}$ and $\overrightarrow{BC}$ based on the key point coordinates A, B, C;
S303, rotating the key point vector $\overrightarrow{BA}$ counterclockwise until it coincides with the key point vector $\overrightarrow{BC}$; the angle through which $\overrightarrow{BA}$ is rotated to reach $\overrightarrow{BC}$ is taken as the corresponding counterclockwise rotation angle.
The skeleton analysis and posture analysis of a video frame yield a two-dimensional pose, and the key points in the human body key point coordinate graph are in fact projections of the real pose onto a two-dimensional plane, so a plain limb angle can hardly represent the motion limb characteristics accurately. As shown in FIG. 3, the limb angle between the right upper arm and the right forearm is the same whether the right arm is bent in front of the chest or bent at the side of the body. Judged from that value alone, the limb characteristics of the right arm appear identical because the limb angles are identical, whereas the two actions actually differ greatly.
Therefore, the invention adds direction information, namely the rotation direction, on top of the limb angle, so that the resulting counterclockwise rotation angle carries both angle information and direction information (as shown in FIG. 4). This compensates for the posture information lost when the real pose is projected onto the two-dimensional plane, allows the motion limb characteristics to be represented accurately, and thus ensures the accuracy of human body action evaluation. Meanwhile, the ten counterclockwise rotation angles designed by the invention essentially cover the important motion limb characteristics of the human posture, which further ensures the effect of the evaluation.
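A minimal sketch of steps S301 to S303 follows. It assumes the two key point vectors share the vertex B (i.e., they are $\overrightarrow{BA}$ and $\overrightarrow{BC}$) and a standard mathematical coordinate system with the y-axis pointing up; in image coordinates, where y grows downward, the visual sense of the rotation is flipped:

```python
import math

def ccw_rotation_angle(a, b, c):
    """Angle in [0, 360) through which vector BA must be rotated
    counterclockwise to coincide with vector BC (vertex at B)."""
    ba = (a[0] - b[0], a[1] - b[1])
    bc = (c[0] - b[0], c[1] - b[1])
    angle = math.degrees(math.atan2(bc[1], bc[0]) - math.atan2(ba[1], ba[0]))
    return angle % 360.0  # wrap into [0, 360) so the direction is preserved

# The two poses of FIG. 3 get different values once direction is kept:
print(ccw_rotation_angle((0, 1), (0, 0), (1, 0)))   # 270.0
print(ccw_rotation_angle((0, 1), (0, 0), (-1, 0)))  # 90.0
```

A plain unsigned limb angle would report 90 degrees in both cases; keeping the rotation direction distinguishes them.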
In a specific implementation, the action evaluation auxiliary information includes the similarity between a counterclockwise rotation angle to be tested in the video to be tested and the corresponding template counterclockwise rotation angle in the template video, where the similarity is calculated based on a dynamic time warping (DTW) algorithm. In this embodiment, only the similarity for a single type of counterclockwise rotation angle can be calculated at a time.
The similarity between the counterclockwise rotation angle to be tested and the corresponding template counterclockwise rotation angle is calculated through the following steps:
S311, acquiring the counterclockwise rotation angle sequence to be tested $P=(p_1,p_2,\ldots,p_n)$ and the corresponding template counterclockwise rotation angle sequence $Q=(q_1,q_2,\ldots,q_m)$; $p_i$ denotes the counterclockwise rotation angle to be tested corresponding to the i-th video frame in the video to be tested, and $q_i$ denotes the template counterclockwise rotation angle corresponding to the i-th video frame in the template video;
S312, constructing an $n\times m$ two-dimensional matrix C based on the two sequences, where C(i, j) denotes the Euclidean distance between the i-th counterclockwise rotation angle to be tested and the j-th template counterclockwise rotation angle;
S313, in the two-dimensional matrix C, calculating the cumulative distances from the start position C(0, 0) to the end position C(n, m) and recording the corresponding matching paths; then selecting the matching path corresponding to the minimum cumulative distance D as the optimal matching path, and calculating its path step number K;
S314, calculating the corresponding similarity score based on the minimum cumulative distance D and the path step number K.
In a specific implementation, the cumulative distance is calculated by the following formula:
$d(i,j)=c(i,j)+\min\{d(i-1,j-1),\,d(i-1,j),\,d(i,j-1)\};$
the optimal matching path is selected by the following formula:
$\mathrm{DTW}(P,Q)=\min\sum_{k=1}^{K}c_k;$
the similarity score is calculated by the following formula:
$s=e^{-h\cdot D/K};$
in the above expressions, d(i, j) denotes the cumulative distance from the start position C(0, 0) to the position C(i, j); $c_k$ denotes the k-th element of the two-dimensional matrix C along a matching path; s denotes the similarity score; and h denotes the adjustment coefficient, which is set to 0.2. DTW in the formula refers to the optimal-matching-path (dynamic time warping) operation.
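The following is a sketch of steps S311 to S314 in Python. The dynamic-programming recurrence is the cumulative-distance formula above; the exponential form of the final score, $s=e^{-h\cdot D/K}$, is an assumption consistent with the variables the text defines (minimum cumulative distance D, path step number K, adjustment coefficient h = 0.2) rather than a formula reproduced from the patent drawings:

```python
import math

def dtw_similarity(p, q, h=0.2):
    """Similarity score between two counterclockwise-rotation-angle
    sequences of possibly unequal length, via dynamic time warping."""
    n, m = len(p), len(q)
    # c(i, j): distance between the i-th angle to be tested and the
    # j-th template angle (scalars, so the absolute difference).
    c = [[abs(pi - qj) for qj in q] for pi in p]
    INF = float("inf")
    d = [[INF] * m for _ in range(n)]       # cumulative distances
    steps = [[0] * m for _ in range(n)]     # path step counts
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                d[i][j], steps[i][j] = c[i][j], 1
                continue
            prev = []
            if i > 0 and j > 0:
                prev.append((d[i - 1][j - 1], steps[i - 1][j - 1]))
            if i > 0:
                prev.append((d[i - 1][j], steps[i - 1][j]))
            if j > 0:
                prev.append((d[i][j - 1], steps[i][j - 1]))
            best, k = min(prev)             # d(i,j) = c(i,j) + min{...}
            d[i][j], steps[i][j] = c[i][j] + best, k + 1
    D, K = d[n - 1][m - 1], steps[n - 1][m - 1]
    return math.exp(-h * D / K)             # assumed score form; 1.0 = identical

# Two renditions of the same movement at different speeds:
print(dtw_similarity([10, 20, 30], [10, 15, 20, 30]))  # close to 1
```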
In actual action evaluation, the video to be tested is compared with the template video and their similarity is calculated, and the action evaluation is completed through this similarity. Video data take the form of time series, so what has to be calculated is the similarity between two time series. However, different people perform the same action at different speeds, and even the same person repeating an action shows differences between body parts, so the two time series are generally of unequal length. In this situation, existing methods that calculate similarity directly from the Euclidean distance cannot effectively measure the similarity between the time series.
Therefore, the method introduces a dynamic time warping algorithm to calculate the similarity between the video to be tested and the template video: by adjusting the temporal correspondence (i.e., the length relationship) between the two sequences, it computes their minimum cumulative distance and finds the optimal matching path, so that the similarity of two time series of unequal length can be calculated and the accuracy of human body action evaluation is ensured. Meanwhile, since the counterclockwise rotation angle accurately represents the motion limb characteristics, the similarity between counterclockwise rotation angles effectively represents the similarity between the video to be tested and the template video, which further assists the action evaluation.
In a specific implementation, when calculating the action evaluation auxiliary information, the recommended key points are selected to participate in the calculation through the following steps:
S321, calculating the variance of each counterclockwise rotation angle in the human body key point coordinate graph;
S322, calculating the motion information proportion of each counterclockwise rotation angle based on its variance;
S323, selecting the key points corresponding to the counterclockwise rotation angle with the largest motion information proportion as the recommended key points.
The variance of the counterclockwise rotation angle is calculated by the following formula:
$\sigma^2=\frac{1}{N}\sum_{r=1}^{N}(R_r-u_r)^2;$
the motion information proportion of the counterclockwise rotation angle is calculated by the following formula:
$I_n=\frac{e^{\sigma_n^2}}{\sum_{i=1}^{N}e^{\sigma_i^2}};$
in the above formulas, $\sigma^2$ denotes the variance of a counterclockwise rotation angle; R denotes the counterclockwise rotation angle; $u_r$ denotes the mean of the counterclockwise rotation angles in the human body key point coordinate graph; N denotes the number of counterclockwise rotation angles in the human body key point coordinate graph; $I_n$ denotes the motion information proportion of the n-th counterclockwise rotation angle in the human body key point coordinate graph; $\sigma_n^2$ denotes the variance of the n-th counterclockwise rotation angle in the human body key point coordinate graph; and e denotes the natural constant.
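A sketch of steps S321 to S323 follows. The softmax-style proportion $I_n=e^{\sigma_n^2}/\sum_i e^{\sigma_i^2}$ matches the variables defined above (per-angle variances and the natural constant e) but is a reconstruction, since the patent's own formula images are not reproduced here:

```python
import math

def recommend_angle(angle_sequences):
    """angle_sequences maps an angle name to its per-frame values; returns
    the angle type carrying the largest motion information proportion."""
    variances = {}
    for name, seq in angle_sequences.items():
        mean = sum(seq) / len(seq)
        variances[name] = sum((r - mean) ** 2 for r in seq) / len(seq)  # S321
    # Softmax over the variances; subtracting the maximum keeps exp() in
    # range and leaves the proportions unchanged.
    vmax = max(variances.values())
    exps = {name: math.exp(v - vmax) for name, v in variances.items()}
    z = sum(exps.values())
    proportions = {name: e / z for name, e in exps.items()}             # S322
    return max(proportions, key=proportions.get)                        # S323

# The right elbow sweeps a large range while the left knee barely moves:
print(recommend_angle({"right_elbow": [90, 120, 160, 120, 90],
                       "left_knee": [178, 179, 178, 179, 178]}))  # right_elbow
```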
In actual action evaluation, most actions involve principal movement of only a small set of limbs, while the other limbs move little or not at all. The limb angles of the principal movement vary over a large range, the limb angles of non-moving limbs vary over a small one, and action evaluation generally considers only the limbs performing the principal movement.
Therefore, by calculating the variances of the counterclockwise rotation angles, the motion information proportions, and the Euclidean distances and differences between key points, the method selects the recommended key points with large movement amplitude to participate in the calculation. On the one hand, the recommended key points accurately reflect the limbs performing the principal movement, which ensures the accuracy of human body action evaluation; on the other hand, key points of limbs that move little or not at all do not participate in the calculation, which reduces the computation required for the evaluation.
In a specific implementation, the action evaluation auxiliary information includes the Euclidean distance between key points, which is calculated by the formula
$d(A,B)=\sqrt{\sum_{i=1}^{n}(x_i-y_i)^2};$
in the above formula, d(A, B) denotes the Euclidean distance between the key points $A(x_1,x_2,\ldots,x_n)$ and $B(y_1,y_2,\ldots,y_n)$.
In a specific implementation, the action evaluation auxiliary information includes the positional relationship between key points, and the positional relationship between key points includes slope and difference.
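Minimal helpers for these remaining kinds of auxiliary information are sketched below; the function names are illustrative, and the slope convention for vertical segments is an assumption:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two key points of equal dimension."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def slope(a, b):
    """Slope of the segment from key point a to key point b."""
    dx = b[0] - a[0]
    return (b[1] - a[1]) / dx if dx != 0 else math.inf  # vertical segment

def difference(a, b):
    """Per-axis coordinate differences between two key points."""
    return (b[0] - a[0], b[1] - a[1])

# Example with a hip at (2, 3) and a knee at (5, 7):
print(euclidean((2, 3), (5, 7)))   # 5.0
print(slope((2, 3), (5, 7)))       # 1.333...
print(difference((2, 3), (5, 7)))  # (3, 4)
```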
By calculating action evaluation auxiliary information such as the Euclidean distance between key points and the positional relationship between key points to assist in completing the action evaluation, the method can be better applied to the evaluation of various actions, which ensures both the efficiency and the accuracy of human body action evaluation.
Specifically, when generating the human body key point coordinate graph, the corresponding video frame is first input into a pre-trained pose estimation model to output a corresponding heat map.
When training the pose estimation model, a pose data set is acquired for training, and the labels pre-marked on the training images in the pose data set are converted into corresponding heat map labels to obtain the corresponding label heat maps.
When generating a label heat map, the size of the label heat map ($W_h\times H_h$) is first set and an all-zero matrix of size $W_h\times H_h$ is generated; the heat distribution of a pre-marked label on the label heat map is then calculated through the following formula to generate the corresponding label heat map:
$g(x,y)=e^{-\frac{(x-x_0)^2+(y-y_0)^2}{2\sigma^2}};$
When calculating the key point coordinates, a heat map of size $W_h\times H_h$ is obtained and reduced to dimension $1\times(W_h\cdot H_h)$; the index of the maximum heat value of the corresponding key point in the heat map is calculated through the following formula:
$\mathrm{index}=\sum_{i}i\cdot\frac{e^{\beta x_i}}{\sum_{j}e^{\beta x_j}};$
finally, the coordinates of the corresponding key point are calculated from this index and the heat map dimensions: the quotient of the index divided by the corresponding $W_h$ gives the key point coordinate x and the remainder gives the coordinate y, i.e., the key point coordinates (x, y) are obtained.
In the above, g denotes a heat value; $x_0$, $y_0$ denote the real coordinates of a pre-marked label; x and y denote the coordinates of the label in the label heat map; $\sigma$ denotes the standard deviation, which takes the value 2 or 3; e denotes the natural constant; i and j denote indexes of the one-dimensional heat map; $x_i$, $x_j$ denote the heat values corresponding to indexes i and j; and $\beta$ denotes a calibration coefficient.
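The following sketch puts the two formulas above together. Both the Gaussian label and the softmax-weighted (soft-argmax) index are reconstructions consistent with the variables defined in the text, and row-major flattening is assumed for the index-to-coordinate mapping:

```python
import math

def label_heatmap(w, h, x0, y0, sigma=2.0):
    """Gaussian label heat map of size w x h centered on (x0, y0)."""
    return [[math.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
             for x in range(w)] for y in range(h)]

def decode_keypoint(heatmap, beta=10.0):
    """Recover (x, y) from a heat map via a softmax-weighted index."""
    w = len(heatmap[0])
    flat = [v for row in heatmap for v in row]       # reduce to 1 x (Wh*Hh)
    m = max(flat)
    exps = [math.exp(beta * (v - m)) for v in flat]  # numerically stable softmax
    z = sum(exps)
    idx = round(sum(i * e for i, e in enumerate(exps)) / z)
    return idx % w, idx // w                         # remainder -> x, quotient -> y

hm = label_heatmap(8, 8, x0=5, y0=3)
print(decode_keypoint(hm))  # (5, 3)
```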
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art may make various changes in form and detail without departing from the spirit and scope of the invention as defined by the appended claims. Common general knowledge of the specific construction and characteristics of the embodiment is not described here at length. Finally, the claimed scope of the invention should be determined by the claims, and the description of the embodiments in the specification should be construed as explaining the content of the claims.

Claims (6)

1. A human motion assessment method based on visual images, characterized by comprising the following steps:
S1: obtaining a video to be tested of a tester;
S2: performing skeleton analysis and posture analysis on video frames of the video to be tested to generate a corresponding human body key point coordinate graph;
S3: calculating corresponding action evaluation auxiliary information based on the human body key point coordinate graph in combination with the corresponding action evaluation criteria;
in step S3, the action evaluation auxiliary information includes counterclockwise rotation angles between key points;
the counterclockwise rotation angle is calculated through the following steps:
S301: acquiring the key point coordinates A, B, C used for calculating the counterclockwise rotation angle;
S302: calculating the corresponding key point vectors $\overrightarrow{BA}$ and $\overrightarrow{BC}$ based on the key point coordinates A, B, C;
S303: rotating the key point vector $\overrightarrow{BA}$ counterclockwise until it coincides with the key point vector $\overrightarrow{BC}$; the angle through which $\overrightarrow{BA}$ is rotated to reach $\overrightarrow{BC}$ is taken as the corresponding counterclockwise rotation angle;
the action evaluation auxiliary information includes the similarity between a counterclockwise rotation angle to be tested in the video to be tested and the corresponding template counterclockwise rotation angle in the template video, the similarity being calculated based on a dynamic time warping algorithm;
when calculating the action evaluation auxiliary information, recommended key points are selected to participate in the calculation through the following steps:
S321: calculating the variance of each counterclockwise rotation angle in the human body key point coordinate graph;
S322: calculating the motion information proportion of each counterclockwise rotation angle based on its variance;
S323: selecting the key points corresponding to the counterclockwise rotation angle with the largest motion information proportion as the recommended key points;
the variance of the counterclockwise rotation angle is calculated by the following formula:
$\sigma^2=\frac{1}{N}\sum_{r=1}^{N}(R_r-u_r)^2;$
the motion information proportion of the counterclockwise rotation angle is calculated by the following formula:
$I_n=\frac{e^{\sigma_n^2}}{\sum_{i=1}^{N}e^{\sigma_i^2}};$
in the above formulas, $\sigma^2$ denotes the variance of a counterclockwise rotation angle; R denotes the counterclockwise rotation angle; $u_r$ denotes the mean of the counterclockwise rotation angles in the human body key point coordinate graph; N denotes the number of counterclockwise rotation angles in the human body key point coordinate graph; $I_n$ denotes the motion information proportion of the n-th counterclockwise rotation angle in the human body key point coordinate graph; $\sigma_n^2$ denotes the variance of the n-th counterclockwise rotation angle in the human body key point coordinate graph; e denotes the natural constant;
S4: completing the action evaluation based on the action evaluation auxiliary information in combination with the corresponding action evaluation decision basis, so as to generate a corresponding action evaluation result.
2. The human motion assessment method based on visual images according to claim 1, characterized in that the similarity between the counterclockwise rotation angle to be tested and the corresponding template counterclockwise rotation angle is calculated through the following steps:
S311: acquiring the counterclockwise rotation angle sequence to be tested $P=(p_1,p_2,\ldots,p_n)$ and the corresponding template counterclockwise rotation angle sequence $Q=(q_1,q_2,\ldots,q_m)$; $p_i$ denotes the counterclockwise rotation angle to be tested corresponding to the i-th video frame in the video to be tested; $q_i$ denotes the template counterclockwise rotation angle corresponding to the i-th video frame in the template video;
S312: constructing an $n\times m$ two-dimensional matrix C based on the two sequences; C(i, j) denotes the Euclidean distance between the i-th counterclockwise rotation angle to be tested and the j-th template counterclockwise rotation angle;
S313: in the two-dimensional matrix C, calculating the cumulative distances from the start position C(0, 0) to the end position C(n, m) and recording the corresponding matching paths; then selecting the matching path corresponding to the minimum cumulative distance D as the optimal matching path, and calculating the path step number K of the optimal matching path;
S314: calculating the corresponding similarity score based on the minimum cumulative distance D and the path step number K.
3. The human motion assessment method based on visual images according to claim 2, characterized in that:
the cumulative distance is calculated by the following formula:
$d(i,j)=c(i,j)+\min\{d(i-1,j-1),\,d(i-1,j),\,d(i,j-1)\};$
the optimal matching path is selected by the following formula:
$\mathrm{DTW}(P,Q)=\min\sum_{k=1}^{K}c_k;$
the similarity score is calculated by the following formula:
$s=e^{-h\cdot D/K};$
in the above formulas, d(i, j) denotes the cumulative distance from the start position C(0, 0) to the position C(i, j); $c_k$ denotes the k-th element of the two-dimensional matrix C along a matching path; s denotes the similarity score; h denotes the adjustment coefficient, which is set to 0.2.
4. The human motion assessment method based on visual images according to claim 1, characterized in that the types of the counterclockwise rotation angle include the left forearm-left upper arm angle, the left upper arm-left shoulder angle, the left upper arm-torso angle, the torso-left thigh angle, the left thigh-left calf angle, the right upper arm-right forearm angle, the right shoulder-right upper arm angle, the torso-right upper arm angle, the torso-right thigh angle, and the right thigh-right calf angle; when calculating the similarity between a counterclockwise rotation angle to be tested and a template counterclockwise rotation angle, only the similarity for a single type of counterclockwise rotation angle can be calculated at a time.
5. The human motion assessment method based on visual images according to claim 1, characterized in that in step S3, the action evaluation auxiliary information includes the Euclidean distance between key points, which is calculated by the formula
$d(A,B)=\sqrt{\sum_{i=1}^{n}(x_i-y_i)^2};$
in the above formula, d(A, B) denotes the Euclidean distance between the key points $A(x_1,x_2,\ldots,x_n)$ and $B(y_1,y_2,\ldots,y_n)$.
6. The human motion assessment method based on visual images according to claim 1, characterized in that in step S3, the action evaluation auxiliary information includes the positional relationship between key points; the positional relationship between key points includes slope and difference.
CN202111423509.3A 2021-11-26 2021-11-26 A human motion assessment method based on visual images Active CN114092971B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111423509.3A CN114092971B (en) 2021-11-26 2021-11-26 A human motion assessment method based on visual images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111423509.3A CN114092971B (en) 2021-11-26 2021-11-26 A human motion assessment method based on visual images

Publications (2)

Publication Number Publication Date
CN114092971A CN114092971A (en) 2022-02-25
CN114092971B true CN114092971B (en) 2025-02-07

Family

ID=80305058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111423509.3A Active CN114092971B (en) 2021-11-26 2021-11-26 A human motion assessment method based on visual images

Country Status (1)

Country Link
CN (1) CN114092971B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114373531B (en) * 2022-02-28 2022-10-25 深圳市旗扬特种装备技术工程有限公司 Behavior action monitoring and correcting method, behavior action monitoring and correcting system, electronic equipment and medium
CN116110584B (en) * 2023-02-23 2023-09-22 江苏万顶惠康健康科技服务有限公司 Human health risk assessment early warning system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012046392A1 (en) * 2010-10-08 2012-04-12 パナソニック株式会社 Posture estimation device and posture estimation method
CN105809144B (en) * 2016-03-24 2019-03-08 重庆邮电大学 A gesture recognition system and method using motion segmentation
CN109840478B (en) * 2019-01-04 2021-07-02 广东智媒云图科技股份有限公司 Action evaluation method and device, mobile terminal and readable storage medium
CN110941990B (en) * 2019-10-22 2023-06-16 泰康保险集团股份有限公司 Method and device for evaluating human body actions based on skeleton key points

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
基于人体姿态估计的功能性动作筛查自动化评估 [Automated assessment of functional movement screening based on human pose estimation]; 徐乾锋 et al.; Proceedings of the 34th China Simulation Conference and the 21st Asian Simulation Conference, China Simulation Federation; 2022-12-09; 1018-1025 *
网球训练机器人中人体动作实时评估算法及其实现 [Real-time human motion evaluation algorithm and its implementation in a tennis training robot]; 王宇鹏; China Master's Theses Full-text Database, Information Science and Technology; 2018-10-15 (No. 10); Section 2.3, Chapter 3, FIG. 3-3, Table 3-2 *

Also Published As

Publication number Publication date
CN114092971A (en) 2022-02-25

Similar Documents

Publication Publication Date Title
CN104598867B (en) A kind of human action automatic evaluation method and dancing points-scoring system
CN111931804B (en) An automatic scoring method for human action based on RGBD camera
Mori et al. Recovering 3d human body configurations using shape contexts
CN111881887A (en) Multi-camera-based motion attitude monitoring and guiding method and device
CN111144217A (en) Motion evaluation method based on human body three-dimensional joint point detection
Elaoud et al. Skeleton-based comparison of throwing motion for handball players
CN114092971B (en) A human motion assessment method based on visual images
CN112906653A (en) Multi-person interactive exercise training and evaluation system
CN114093032B (en) A human motion assessment method based on motion state information
CN114092863B (en) A human motion estimation method for multi-view video images
CN114092862B (en) An action evaluation method based on optimal frame selection
WO2016107226A1 (en) Image processing method and apparatus
Karunaratne et al. Objectively measure player performance on olympic weightlifting
Skublewska-Paszkowska et al. Dual attention graph convolutional neural network to support mocap data animation
CN117373109A (en) Posture assessment method based on human skeleton points and action recognition
CN116740618A (en) Motion video action evaluation method, system, computer equipment and medium
Li et al. Automatic tracking method for 3D human motion pose using contrastive learning
Yang et al. Wushu movement evaluation method based on Kinect
Tang et al. Research on dance movement evaluation method based on deep learning posture estimation
Wang et al. A Transformer Based Cycle Posture Detection System Using 3D Pose Estimation Algorithm
CN114724013B (en) Human motion similarity matching scoring method, device and readable storage medium
Zhang et al. Motion damage attitude acquisition based on three-dimensional image analysis
Zhang et al. Intelligent Pose Recognition and Evaluation System for Rowing Sports
CN113643788B (en) Method and system for determining feature points based on multiple image acquisition devices
JP2021099666A (en) Method for generating learning model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant