
CN112990060B - Human body posture estimation analysis method for joint point classification and joint point reasoning - Google Patents

Info

Publication number: CN112990060B
Application number: CN202110338088.8A
Authority: CN (China)
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other versions: CN112990060A (Chinese, zh)
Inventors: 陈双叶, 杨建敏
Original and current assignee: Beijing University of Technology (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Beijing University of Technology; priority to CN202110338088.8A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a human body posture estimation analysis method based on joint point classification and joint point reasoning, comprising detecting human body joint points and clustering the detected joint points. First, the joint points of the pedestrians in a video are detected, yielding position information and direction information for every pedestrian's joint points. The joint point position coordinates are then subtracted pairwise and the absolute values taken, yielding several groups of equal coordinate differences; the joint points producing these equal differences are taken as "fixed joint points". If some of these joint points are occluded, the fixed joint points are recovered by manually inserting joint points. The remaining joint points are "non-fixed joint points". The detected joint points thus fall into two main classes: the fixed joint points, which form a preliminary frame of the human body, and the non-fixed joint points. The non-fixed joint points belonging to a given preliminary frame are found through its fixed joint points, and from the direction information between joint points and the relations between shared joint points, the human body postures in the video are obtained.

Description

Human body posture estimation analysis method for joint point classification and joint point reasoning
Technical Field
The invention relates to a method for intelligently analyzing human body postures in a video scene, and belongs to the field of intelligent security.
Background
With the rapid development of artificial intelligence, the proportion of work devoted to image processing keeps growing. How to make a machine recognize an image the way a human does, with accuracy exceeding human observation, is an ongoing challenge in the field of computer vision.
The coverage of surveillance cameras in the security field is continuously expanding; they have become indispensable social infrastructure, and the video data they generate grows exponentially. However, most cameras are mere transmission media rather than intelligent cameras, and cannot accurately analyze the postures of pedestrians in a video scene. If this video data were used effectively to judge the behavior of pedestrians accurately, and a warning system were triggered on abnormal behavior, the safety of public places could be greatly improved.
Before the rapid development of deep learning, pedestrian posture estimation relied on traditional image processing techniques, evaluating the human posture by modeling shapes and matching templates. These methods have poor robustness and poor results, fail in complex real-world scenes, and cannot be applied well in practice. With the development of the field, research on multi-person posture estimation generally follows one of two approaches: top-down or bottom-up. The top-down approach benefits from explicit joint localization and inherent association: once the target person is locked, pose estimation becomes relatively easy. However, it is not suited to crowded environments, where it performs poorly. The bottom-up approach needs no per-person bounding: it only detects and clusters the key points of the human body, and its difficulty lies in correctly selecting the joint points belonging to one person.
Disclosure of Invention
The invention adopts a bottom-up method for human body posture estimation. Aiming at the difficulty this method has in joint point clustering, a method capable of analyzing pedestrian posture estimation is provided.
The technical scheme adopted by the invention is a human body posture estimation analysis method for joint point classification and joint point reasoning, wherein one branch is responsible for detecting human body joint points, and the other branch is responsible for clustering the joint points.
S1, the pedestrians in a video frame are input into a convolutional neural network to detect the human body joint points, and the network outputs the position information and direction information of the joint points of all pedestrians; the joint point position coordinates are subtracted pairwise and the absolute values taken, yielding several groups of equal coordinate differences; the joint points with equal position differences are marked as fixed joint points, and the remaining joint points as non-fixed joint points.
S2, the fixed joint points obtained in S1 are taken as the preliminary human body posture frame. The non-fixed joint points belonging to the same person are then inferred through the preliminary frame. The joint points in the preliminary frame carry direction information; a group of limbs is formed through the direction information between two joint points, and the joint points of these limbs in turn carry direction information with other joint points, connecting the joint points pairwise.
S3, a limb information set is obtained from S2. The joint points shared between limbs are used to connect two limbs, or even two jointed sections, so that the preliminary human body posture frame finds the "non-fixed joint points" belonging to it, and the human body posture estimation of the pedestrian is obtained.
A bottom-up method is adopted to realize the human body posture estimation. The idea is to detect the position coordinates of the human body joint points first, and then cluster the joint points of each human body, so as to judge the posture (or behavior) of the human body. The method comprises the following steps:
First, a pedestrian region of size w*h (w is the width of the picture and h its height) is cropped from a video frame and input into a convolutional neural network, which outputs all the joint points in the image; the output joint point information contains position information and direction information. The position information is given by formula (1):
L = (L_1, L_2, ..., L_n), L_n ∈ R^{w*h}, n ∈ (1, 2, ..., 18)   formula (1)
where L denotes the position information of the joint points, L_1 the position information of the first joint point, n indexes the joint points to be detected (18 joint points in total), and R denotes the real numbers.
The direction information of the joint points is given by formula (2):
D = (D_1, D_2, ..., D_c), D_c ∈ R^{w*h}, c ∈ (1, 2, ..., 19)   formula (2)
where D denotes the direction information between pairs of joint points, D_1 the direction information between the first pair of joint points, D_2 that between the second pair, and c indexes the connected pairs: the 18 joint points form 19 connected pairs, each carrying direction information between its two joint points.
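As a concrete reading of formulas (1) and (2), the two network outputs can be pictured as stacks of per-joint and per-pair maps over the w*h image plane. The shapes below are a minimal sketch under assumed example dimensions; the patent fixes only the counts (18 joint points, 19 connected pairs) and the w*h domain.

```python
import numpy as np

# Illustrative frame size; only w (width) and h (height) are named in the text.
w, h = 64, 48
n_joints, n_pairs = 18, 19   # 18 joint points, 19 connected pairs per formula (2)

# Formula (1): one position map L_n per joint point, L_n in R^{w*h}.
L = np.zeros((n_joints, h, w))
# Formula (2): one direction map D_c per connected joint pair, D_c in R^{w*h}.
D = np.zeros((n_pairs, h, w))
```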
The coordinates of all the joint points of the human body are detected from the image, and the absolute difference of every pair of coordinates is computed. For joint points L_i = (x_i, y_i) and L_j = (x_j, y_j), where L_i and L_j denote the i-th and j-th joint coordinates, the calculation is given by formula (3):
(x_n, y_n) = (|x_i - x_j|, |y_i - y_j|)   formula (3)
Formula (3) is applied to the coordinates of all detected joint points. Among the results, at least 4 pairs of difference vectors are identical (within an error range); these identify the preliminary human body posture frame described above. For example, let the nose coordinate be L_nose = (x_nose, y_nose) and the left- and right-eye coordinates be L_eye = (x_l, y_l) and L_eye = (x_r, y_r) (l denoting left and r right); formula (3) then theoretically yields two equal difference vectors, because the joint points of a person are symmetric and therefore lie at equal relative distances from a shared joint point. Likewise, the left and right ears, the left and right shoulders, and the left and right hips each yield equal differences with respect to the nose, and the neck yields equal differences with the left and right ears, shoulders, eyes, and hips, so formula (3) produces 4 groups of equal coordinates. Other joint points (e.g. wrists and knees) are comparatively flexible, unlike the nose, eyes, shoulders, and hips, which are relatively fixed: the distances between the relatively fixed joint points stay equal, whereas the distance between a flexible joint point and a fixed one is uncertain for a pedestrian, changing with movement.
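The pairwise differencing of formula (3) and the search for equal difference vectors can be sketched as follows; the joint names, the toy coordinates, and the tolerance `tol` are illustrative assumptions, not values from the patent.

```python
import itertools
import numpy as np

def find_fixed_pairs(joints, tol=0.5):
    """Apply formula (3): pairwise absolute coordinate differences, then
    collect joint pairs whose difference vectors agree within tol."""
    diffs = {}
    for (a, pa), (b, pb) in itertools.combinations(joints.items(), 2):
        diffs[(a, b)] = (abs(pa[0] - pb[0]), abs(pa[1] - pb[1]))
    # pairs of joint pairs with (near-)equal difference vectors
    equal = []
    keys = list(diffs)
    for k1, k2 in itertools.combinations(keys, 2):
        d1, d2 = np.array(diffs[k1]), np.array(diffs[k2])
        if np.all(np.abs(d1 - d2) <= tol):
            equal.append((k1, k2))
    return diffs, equal

# A symmetric toy skeleton: the nose midway between the eyes.
joints = {"nose": (10.0, 5.0), "l_eye": (8.0, 4.0), "r_eye": (12.0, 4.0)}
diffs, equal = find_fixed_pairs(joints)
# (nose, l_eye) and (nose, r_eye) give the same |dx|, |dy| difference vector
```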
From formula (3) it follows that 4 groups of coordinates are equal, computed over the left and right ears, left and right eyes, left and right shoulders, left and right hips, nose, and neck. These joint points are gathered into one group and the remainder into another, as follows.
The first group is called the "fixed joint points" and the second the "non-fixed joint points". In the fixed group, (x_1, y_1)^T denotes the coordinates of one joint point among the left and right ears, left and right eyes, left and right shoulders, and left and right hips, and likewise (x_2, y_2)^T denotes another of these eight joint points; combined, they form the preliminary human body posture frame. In the non-fixed group, (x_n, y_n)^T denotes a joint point waiting for the preliminary frame to claim it as its own, with n the number of detected joint points.
The main innovation of the invention is to find the "non-fixed joint points" through the "fixed joint points". The "fixed skeletal joint points" comprise the left and right eyes, left and right ears, left and right shoulders, and left and right hips; the "non-fixed joint points" comprise the nose, left and right elbows, left and right wrists, left and right knees, left and right ankles, and neck. The key observation is that the relative positions of the fixed joint points remain essentially unchanged as a pedestrian moves, unlike joint points such as the wrists and knees, which change with the pedestrian's movement.
In complex pedestrian scenes, however, one or more joint points may fall outside the field of view, i.e. be occluded, in which case obtaining complete joint point information as in formulas (1) and (2) is unrealistic. An occluded joint point hinders the positioning of the "fixed joint points"; for instance, a person's left shoulder, or left shoulder and left hip, may be blocked, and such cases are difficult to handle.
For the above difficulty, the main idea of the solution is data enhancement. The first technique is to copy the hard-to-detect joint points and feed them into the network again, enlarging the training data and improving the network's learning ability. The second is to insert joint point coordinates manually: for the "fixed joint points", if one of the left/right shoulders or left/right hips is detected, the corresponding symmetric joint point coordinate can be inserted manually according to symmetry. Figure 4 shows a manually inserted joint point: the right shoulder of the person is occluded, and the solution is to mark the joint point manually. The model is then trained with the augmented data to obtain a more accurate human body posture estimation.
The manual insertion of a joint point is performed according to symmetry. An auxiliary joint point coordinate F = (x_f, y_f) is needed to place the symmetric joint point (F denotes the auxiliary joint point, x_f its abscissa and y_f its ordinate); the auxiliary joint point does not belong to the "fixed joint points". The coordinate of the inserted joint point is computed from the equality of the distances of the two points to the auxiliary point. Let the known coordinate be K = (x_know, y_know) and the inserted joint point coordinate be U = (x_unknown, y_know), where K is one of the fixed joint points, x_know its abscissa, y_know its ordinate, and "know" marks a detected coordinate; U is the coordinate of the undetected joint point and x_unknown its abscissa. The calculation is given by formula (4):
|K - F| = |U - F|   formula (4)
Since the distances from K and U to F are equal, x_unknown can be solved from formula (4). Whether the inserted joint point lies to the left or the right of the known coordinate is decided by the relative sizes of x_f and x_know: when x_know < x_f, the inserted abscissa x_unknown lies to the right; when x_know > x_f, it lies to the left.
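Formula (4) and the left/right rule can be sketched as below. The function name and the toy coordinates are illustrative; the assumption that the inserted joint keeps the ordinate of its detected counterpart follows the text's U = (x_unknown, y_know).

```python
def insert_symmetric_joint(K, F):
    """Sketch of formula (4), |K - F| = |U - F|: place the occluded joint U
    at the mirror image of the detected joint K across the auxiliary point F
    along the x-axis. The ordinate of U is taken from K."""
    x_know, y_know = K
    x_f, _ = F
    # equal distances |x_know - x_f| = |x_unknown - x_f|, U on the opposite side
    x_unknown = 2 * x_f - x_know
    side = "right" if x_know < x_f else "left"
    return (x_unknown, y_know), side

# Left shoulder detected at (4, 10), neck used as the auxiliary point at (6, 9):
U, side = insert_symmetric_joint((4.0, 10.0), (6.0, 9.0))
# the inserted right shoulder lands at (8.0, 10.0), to the right
```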
The "non-fixed joint points" matching formula (1) are then searched for through the "fixed joint points", and the posture information of the 18 joint points of the human body is inferred. The search uses the direction information between joint points and the shared joint point information of formula (2) to find the joint points belonging to the same person.
The resulting (x_n, y_n)^T feature vector represents the pedestrian's human body posture information, a static feature of the pedestrian: the preliminary human body posture frame reasons out the non-fixed joint points belonging to itself and thereby recovers its full set of 18 joint points, represented as a 2*18 feature vector of the 18 joint point coordinates.
Dynamic information about the pedestrians, however, is obtained from the video. The strategy adopted is to sample frames at one-second intervals and process each sampled frame as described above, capturing the change of the pedestrian's features over time. The absolute difference between the 2*18-dimensional feature vector of the later video frame and that of the earlier frame is then computed, as shown in formula (5):
(x, y)^T_{2*18} = |(x_i, y_i)^T_{2*18} - (x_j, y_j)^T_{2*18}|   formula (5)
where (x, y)^T_{2*18} denotes the 2*18 feature vector of the change of the 18 joint point coordinates, (x_i, y_i)^T_{2*18} the 2*18 feature vector of the 18 joint point coordinates of the i-th (later) frame, and (x_j, y_j)^T_{2*18} that of the j-th (earlier) frame. The static feature vector above is continuously optimized in the network model as the human body posture estimate, improving the accuracy of the posture estimation.
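The one-second sampling and the differencing of formula (5) might be sketched as follows; the frame rate and the toy pose arrays are assumptions.

```python
import numpy as np

def pose_motion(frames, fps=25):
    """Sketch of formula (5): sample the 2x18 pose feature vectors one second
    apart and take the element-wise absolute difference of consecutive samples."""
    sampled = frames[::fps]                       # one pose per second
    return [np.abs(sampled[i] - sampled[i - 1])   # |later frame - earlier frame|
            for i in range(1, len(sampled))]

# Two seconds of video: a static pose followed by a uniformly shifted pose.
still = np.zeros((2, 18))
moved = np.ones((2, 18))
frames = [still] * 25 + [moved] * 25
deltas = pose_motion(frames)
# a single 2x18 difference, all entries equal to 1
```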
Compared with the prior art, the invention contributes an idea for joint point clustering in human body posture estimation: first determine the preliminary frame of a human body, i.e. the "fixed joint points", and then infer the "non-fixed joint points" from them, thereby detecting the posture or behavior of the whole body. A pedestrian's behavior is judged by the background monitoring system, and if the behavior is abnormal a corresponding alarm signal can be issued, forming an intelligent human behavior analysis system. In the security field this can assist security personnel in their work, improve the quality of protection, and monitor the surrounding environment continuously 24 hours a day, avoiding the hidden dangers brought by security personnel's fatigue.
Drawings
Fig. 1 is a pose estimation model diagram.
Fig. 2 is a fixed joint point diagram.
FIG. 3 is a diagram of a non-stationary joint point.
Fig. 4 is a diagram of manually inserted joint points.
Detailed Description
The invention is described in detail below with reference to the accompanying drawings. To estimate the human body posture, the algorithm draws on the idea of the PAF (Part Affinity Fields) algorithm and has two branches: one branch is responsible for detecting the human skeletal joint points, and the other for clustering the joint points, as shown in Fig. 1.
S1, the joint points of the pedestrians in the video are detected, yielding the position information and direction information of every pedestrian's joint points; the joint point position coordinates are subtracted pairwise and the absolute values taken. This yields several groups of equal coordinate differences, and the joint points with these equal positions are taken as the "fixed joint points", shown as black circles in Fig. 2. The remaining joint points are labelled "non-fixed joint points", shown as white circles in Fig. 3, giving two classes in total.
S2, the "non-fixed joint points" are inferred through the preliminary human body frame determined in S1. The joint points of the preliminary frame carry direction information, and pairs of parts are connected through the direction information between two joint points; in addition, the position and direction information of a shared joint point with respect to two other joint points is used to connect two parts, or even two jointed sections.
S3, the preliminary frame of the human body finds the "non-fixed joint points" belonging to it, and the posture estimation of the pedestrian is obtained.
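Step S3's merging of limbs through shared joint points is, in effect, connected-component grouping. A minimal union-find sketch (the data layout and names are assumptions, not from the patent):

```python
def cluster_by_shared_joints(limbs):
    """Merge limbs (pairs of joint ids) into one person whenever they share
    a joint point, using a simple union-find over joint ids."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for a, b in limbs:                     # a shared joint links two limbs
        parent[find(a)] = find(b)

    groups = {}
    for joint in parent:
        groups.setdefault(find(joint), set()).add(joint)
    return list(groups.values())

# Two people: joints {0, 1, 2} chained through shared joint 1, plus pair {10, 11}.
people = cluster_by_shared_joints([(0, 1), (1, 2), (10, 11)])
```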

Claims (4)

1. A human body posture estimation analysis method of joint point classification and joint point reasoning is characterized in that: the method comprises two branches, wherein one branch is responsible for detecting the joint points of the human body, and the other branch is responsible for clustering the joint points; the method is divided into the following steps,
S1, the pedestrians in a video frame are input into a convolutional neural network to detect the human body joint points, and the network outputs the position information and direction information of the joint points of all pedestrians; the joint point position coordinates are subtracted pairwise and the absolute values taken to obtain several groups of equal coordinate differences; the joint points with equal position differences are marked as fixed joint points, and the remaining joint points as non-fixed joint points;
S2, the fixed joint points obtained in S1 are taken as the preliminary human body posture frame; the non-fixed joint points belonging to the same person are then inferred through the preliminary frame; the joint points in the preliminary frame carry direction information, a group of limbs is formed through the direction information between two joint points, and the joint points of these limbs in turn carry direction information with other joint points, connecting the joint points pairwise;
S3, a limb information set is obtained from S2; the joint points shared between limbs are used to connect two limbs, or even two jointed sections, so that the preliminary human body posture frame finds the "non-fixed joint points" belonging to it, yielding the human body posture estimation of the pedestrian;
a bottom-up method is adopted to realize the human body posture estimation: the coordinates of the human body joint points are detected first, and the joint points are then clustered so as to judge the posture or behavior of the human body; the method comprises the following steps:
first, a pedestrian region of size w*h is cropped from a video frame and input into a convolutional neural network, where w denotes the width of the picture and h its height; all joint points in the image are output, and the output joint point information contains position information and direction information; the position information of the joint points is given by formula (1):
L = (L_1, L_2, ..., L_n), L_n ∈ R^{w*h}, n ∈ (1, 2, ..., 18)   formula (1)
wherein L denotes the position information of the joint points, L_1 the position information of the first joint point, n indexes the joint points to be detected, 18 joint points in total, and R denotes the real numbers;
the direction information of the joint points is given by formula (2):
D = (D_1, D_2, ..., D_c), D_c ∈ R^{w*h}, c ∈ (1, 2, ..., 19)   formula (2)
wherein D denotes the direction information between pairs of joint points, D_1 the direction information between the first pair of joint points, D_2 that between the second pair, and c indexes the connected pairs formed by the 18 joint points, each pair carrying direction information between its two joint points.
2. The human body posture estimation analysis method of joint point classification and joint point reasoning according to claim 1, characterized in that: the coordinates of all the joint points of the human body are detected from the image, and the absolute difference of every pair of coordinates is computed; for joint points L_i = (x_i, y_i) and L_j = (x_j, y_j), where L_i and L_j denote the i-th and j-th joint coordinates, the calculation is given by formula (3):
(x_n, y_n) = (|x_i - x_j|, |y_i - y_j|)   formula (3)
formula (3) is applied to the coordinates of all detected joint points; among the results, at least 4 pairs of difference vectors are identical, identifying the preliminary human body posture frame;
the 4 groups of equal coordinates are deduced from formula (3), computed over the left and right ears, left and right eyes, left and right shoulders, left and right hips, nose, and neck; these joint points are gathered into one group and the remainder into another, as follows:
the first group is the "fixed joint points" and the second the "non-fixed joint points"; in the fixed group, (x_1, y_1)^T denotes the coordinates of one joint point among the left and right ears, left and right eyes, left and right shoulders, and left and right hips, and likewise (x_2, y_2)^T denotes another of these eight joint points; combined, they form the preliminary human body posture frame; in the non-fixed group, (x_n, y_n)^T denotes a joint point waiting for the preliminary frame to find the joint points belonging to it, with n the number of detected joint points; all the detected joint points are thus divided into two classes, one the "fixed joint points", called the preliminary human body posture frame, and the other the "non-fixed joint points", the joint points waiting for clustering.
3. The human body posture estimation analysis method of joint point classification and joint point reasoning according to claim 1, characterized in that: a joint point is inserted manually according to symmetry, using an auxiliary joint point coordinate F = (x_f, y_f) to place the symmetric joint point, where F denotes the auxiliary joint point, x_f its abscissa and y_f its ordinate; the auxiliary joint point does not belong to the "fixed joint points"; the value of the inserted joint point is computed from the equality of the distances of the two points to the auxiliary point; the known coordinate is K = (x_know, y_know) and the inserted joint point coordinate is U = (x_unknown, y_know), where K is one of the fixed joint points, x_know its abscissa, y_know its ordinate, and "know" marks a detected coordinate; U is the coordinate of the undetected joint point and x_unknown its abscissa; the calculation is given by formula (4):
|K - F| = |U - F|   formula (4)
x_unknown is solved from the equality of the distances between the coordinate points in formula (4); whether the inserted joint point lies to the left or the right of the known coordinate is judged by the relative sizes of x_f and x_know: when x_know < x_f, the inserted abscissa x_unknown lies to the right; when x_know > x_f, it lies to the left;
the "non-fixed joint points" are searched for through the "fixed joint points", and the posture information of the 18 joint points of the human body is estimated; the direction information between joint points and the shared joint point information of formula (2) are used to find the joint points belonging to the same person; the resulting (x_n, y_n)^T feature vector represents the pedestrian's human body posture information, a static feature of the pedestrian: the preliminary human body posture frame reasons out the non-fixed joint points belonging to itself, recovering its 18 joint points, represented as a 2*18 feature vector of the 18 joint point coordinates.
4. The human body posture estimation analysis method of joint point classification and joint point reasoning according to claim 1, characterized in that: dynamic information about the pedestrians is obtained from the video by sampling frames at one-second intervals and processing each sampled frame as described above, capturing the change of the pedestrian's features over time; the absolute difference between the 2*18-dimensional feature vector of the later video frame and that of the earlier frame is then computed, as shown in formula (5):
(x, y)^T_{2*18} = |(x_i, y_i)^T_{2*18} - (x_j, y_j)^T_{2*18}|   formula (5)
wherein (x, y)^T_{2*18} denotes the 2*18 feature vector of the change of the 18 joint point coordinates, (x_i, y_i)^T_{2*18} the 2*18 feature vector of the 18 joint point coordinates of the i-th (later) frame, and (x_j, y_j)^T_{2*18} that of the j-th (earlier) frame.
CN202110338088.8A 2021-03-30 2021-03-30 Human body posture estimation analysis method for joint point classification and joint point reasoning Active CN112990060B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110338088.8A CN112990060B (en) 2021-03-30 2021-03-30 Human body posture estimation analysis method for joint point classification and joint point reasoning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110338088.8A CN112990060B (en) 2021-03-30 2021-03-30 Human body posture estimation analysis method for joint point classification and joint point reasoning

Publications (2)

Publication Number Publication Date
CN112990060A CN112990060A (en) 2021-06-18
CN112990060B true CN112990060B (en) 2024-05-28

Family

ID=76338061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110338088.8A Active CN112990060B (en) 2021-03-30 2021-03-30 Human body posture estimation analysis method for joint point classification and joint point reasoning

Country Status (1)

Country Link
CN (1) CN112990060B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110633624A (en) * 2019-07-26 2019-12-31 北京工业大学 A machine vision human abnormal behavior recognition method based on multi-feature fusion
CN111008583A (en) * 2019-11-28 2020-04-14 清华大学 Pedestrian and rider posture estimation method assisted by limb characteristics
CN111274954A (en) * 2020-01-20 2020-06-12 河北工业大学 Embedded platform real-time falling detection method based on improved attitude estimation algorithm
CN111611912A (en) * 2020-05-19 2020-09-01 北京交通大学 A detection method for abnormal head bowing behavior of pedestrians based on human joint points
CN111950412A (en) * 2020-07-31 2020-11-17 陕西师范大学 A Hierarchical Dance Movement Pose Estimation Method Based on Sequence Multi-scale Deep Feature Fusion
CN112329712A (en) * 2020-11-24 2021-02-05 上海海事大学 2D multi-person posture estimation method combining face detection

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109670380B (en) * 2017-10-13 2022-12-27 华为技术有限公司 Motion recognition and posture estimation method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110633624A (en) * 2019-07-26 2019-12-31 北京工业大学 A machine vision human abnormal behavior recognition method based on multi-feature fusion
CN111008583A (en) * 2019-11-28 2020-04-14 清华大学 Pedestrian and rider posture estimation method assisted by limb characteristics
CN111274954A (en) * 2020-01-20 2020-06-12 河北工业大学 Embedded platform real-time falling detection method based on improved attitude estimation algorithm
CN111611912A (en) * 2020-05-19 2020-09-01 北京交通大学 A detection method for abnormal head bowing behavior of pedestrians based on human joint points
CN111950412A (en) * 2020-07-31 2020-11-17 陕西师范大学 A Hierarchical Dance Movement Pose Estimation Method Based on Sequence Multi-scale Deep Feature Fusion
CN112329712A (en) * 2020-11-24 2021-02-05 上海海事大学 2D multi-person posture estimation method combining face detection

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Human behavior recognition method based on spatio-temporal pose features; Zheng Xiao; Peng Xiaodong; Wang Jiaxuan; Journal of Computer-Aided Design &amp; Computer Graphics; 2018-09-15 (Issue 09); full text *
Real-time multi-person pose estimation based on depth images; Xiao Xianpeng; Liu Lixiang; Hu Li; Zhang Hua; Transducer and Microsystem Technologies; 2020-06-02 (Issue 06); full text *

Also Published As

Publication number Publication date
CN112990060A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN106611157B (en) A multi-person gesture recognition method based on optical flow localization and sliding-window detection
Kamal et al. A hybrid feature extraction approach for human detection, tracking and activity recognition using depth sensors
CN107103613B (en) A kind of three-dimension gesture Attitude estimation method
WO2017206005A1 (en) System for recognizing postures of multiple people employing optical flow detection and body part model
CN112966628A (en) Visual angle self-adaptive multi-target tumble detection method based on graph convolution neural network
CN112446882A (en) Robust visual SLAM method based on deep learning in dynamic scene
JP7422456B2 (en) Image processing device, image processing method and program
CN106203503A (en) A kind of action identification method based on skeleton sequence
CN113255514B (en) Behavior identification method based on local scene perception graph convolutional network
Ali et al. Deep Learning Algorithms for Human Fighting Action Recognition.
CN114170686A (en) Elbow bending behavior detection method based on human body key points
CN114663835A (en) A pedestrian tracking method, system, device and storage medium
Amaliya et al. Study on hand keypoint framework for sign language recognition
Wei et al. Object clustering with Dirichlet process mixture model for data association in monocular SLAM
Feng Mask RCNN-based single shot multibox detector for gesture recognition in physical education
Zahoor et al. Remote sensing surveillance using multilevel feature fusion and deep neural network
WO2020013395A1 (en) System for tracking object in video image
Rodríguez-Moreno et al. A new approach for video action recognition: Csp-based filtering for video to image transformation
CN112990060B (en) Human body posture estimation analysis method for joint point classification and joint point reasoning
CN113327267A (en) Action evaluation method based on monocular RGB video
Jessika et al. A study on part affinity fields implementation for human pose estimation with deep neural network
CN117115594A (en) Single-purpose 3D extravehicular spacesuit attitude estimation method
CN116152861A (en) Multi-person behavior detection method based on monocular depth estimation
Rodríguez-Moreno et al. Sign language recognition by means of common spatial patterns
WO2020016963A1 (en) Information processing device, control method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant