CN114067358A - Human body posture recognition method and system based on key point detection technology - Google Patents
Human body posture recognition method and system based on key point detection technology
- Publication number
- CN114067358A (application CN202111287237.9A)
- Authority
- CN
- China
- Prior art keywords: key point, human body, posture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/23 — Pattern recognition; clustering techniques
- G06F18/2411 — Pattern recognition; classification based on the proximity to a decision surface, e.g. support vector machines
- G06N3/045 — Neural networks; combinations of networks
- G06N3/08 — Neural networks; learning methods
Abstract
The invention discloses a human body posture recognition method and system based on a key point detection technology, comprising the following steps: (1) acquiring continuous frame images from video data and preprocessing them; (2) detecting key points of the human skeleton in the obtained images; (3) setting a preset value and, by comparing the number of detected human body key points with the preset value, determining whether to convert coordinates for posture recognition; (4) acquiring relative position features of the limbs mapped by the key points, based on the three-dimensional coordinates of the key points; (5) calculating the cumulative confidence weight of the key points, comparing it with a preset threshold, and judging whether the detected human body key point information representing the posture is sufficient; if it is sufficient, posture recognition is performed by the limb relative-position-relation judgment method. The invention can recognize the posture according to the content of the video image.
Description
Technical Field
The invention relates to the technical field of image recognition, in particular to a human body posture recognition method and system based on a key point detection technology.
Background
Human body posture detection is one of the most challenging directions in the field of computer vision, and is widely applied to security monitoring of key areas, behavior detection in public places, behavior supervision in workplaces, driver behavior detection, safety monitoring of elderly people living alone, safety management by police departments, and the like. At present, the detection of human behaviors mostly relies on on-site attendance by supervisory personnel and video-picture monitoring. As monitoring equipment grows in quantity, data types multiply, and monitored objects become increasingly complex, the traditional behavior monitoring mode has the following problems: on the one hand, supervision personnel must watch too much video-picture information, so their attention is insufficient and working efficiency is low; on the other hand, human judgment and supervision involve subjective factors, making later review and standardization difficult. In the prior art, when posture recognition is performed with key point technology, the video content is often recognized directly, without considering the number of recognizable key points or whether the feature information they express is sufficient; as a result, posture recognition is inaccurate or even impossible, and detection efficiency is low.
Disclosure of Invention
The purpose of the invention: the invention aims to provide a human body posture recognition method and system based on a key point detection technology that can autonomously select a posture recognition algorithm according to the content of the video image and improve the efficiency of human body posture recognition.
The technical scheme is as follows: the invention relates to a human body posture identification method based on a key point detection technology, which comprises the following steps of:
(1) acquiring continuous frame images through video data and preprocessing the continuous frame images;
(2) detecting key points of the human skeleton of the obtained image, obtaining two-dimensional image coordinates of each key point and associating the key points with the human body;
(3) setting a preset value and judging whether the number of detected human body key points exceeds it; if not, posture recognition is not performed; if it does, converting the two-dimensional image coordinates of each key point into three-dimensional coordinates;
(4) acquiring relative position characteristics of the limb mapped by the key points based on the three-dimensional coordinates of the key points, wherein the relative position characteristics comprise the characteristic limb distance between the key points, the limb included angle and the distance difference of the key points in each direction of the x axis, the y axis and the z axis;
(5) calculating the confidence coefficient accumulated weight of the key points, comparing the accumulated weight with a preset threshold value, and judging whether the detected human body key point information representing the posture is enough; if the accumulated weight is larger than a preset threshold value, indicating that the human body key point information representing the gesture is enough, performing gesture recognition by using a relative position relation judgment method of limbs, and outputting a detection result; if the accumulated weight is smaller than the preset threshold value, the human body key point information representing the gesture is insufficient, and the step (6) is executed;
the method for determining the relative position relationship of the limbs comprises the following steps:
(51) acquiring relative position characteristics of limbs under various preset human body postures, and taking the range of the relative position characteristics of the limbs acquired at different postures as a set threshold;
(52) comparing the relative position features of the limbs acquired in step (4) with the set threshold; if they fall within the threshold range, the posture corresponds to the preset posture; otherwise it is not that posture.
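A minimal sketch of steps (51)–(52) in Python; the posture names, feature names, and threshold ranges below are illustrative assumptions, not values taken from the patent:

```python
# Hypothetical per-posture feature ranges (min, max), standing in for the
# ranges measured from preset human body postures in step (51).
POSTURE_RANGES = {
    "squat": {"left_knee_angle": (0.0, 100.0), "torso_thigh_angle": (0.0, 90.0)},
    "stand": {"left_knee_angle": (160.0, 180.0), "torso_thigh_angle": (160.0, 180.0)},
}

def match_posture(features, posture_ranges=POSTURE_RANGES):
    """Return the first preset posture whose feature ranges all contain the
    observed limb relative-position features (step 52), or None if no
    preset posture matches."""
    for name, ranges in posture_ranges.items():
        if all(lo <= features[key] <= hi for key, (lo, hi) in ranges.items()):
            return name
    return None
```

Usage: `match_posture({"left_knee_angle": 80.0, "torso_thigh_angle": 45.0})` matches the assumed "squat" ranges.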
Advantages: compared with the prior art, the invention judges, through the preset value and the key-point confidence cumulative weight, whether the number of acquired human body key points representing the movement is sufficient; on the basis of sufficient key point information, the relative position features between the limbs are extracted and compared with the relative position features of the preset postures to finally recognize the posture.
Further, the method comprises the step (6) of adopting a neural network gesture recognition technology to recognize gestures if the accumulated weight is smaller than a preset threshold value and the human key point information representing the gestures is insufficient, and outputting a detection result; the neural network posture recognition technology in the step (6) specifically comprises the following steps:
(61) extracting the characteristics of the area, the perimeter, the aspect ratio and the eccentricity of the human body target contour by adopting an image processing technology;
(62) combining and normalizing the target contour features and the relative position features of the limbs mapped by the key points to form a feature vector to form a training mode library;
(63) and (3) constructing a neural network posture classifier to recognize the posture through training of the neural network model.
By comparing the cumulative weight with the preset threshold, the key-point detection condition is judged, and the algorithm used to recognize the human posture is switched autonomously according to that condition, which improves detection efficiency. For the case where individual key points cannot be detected because of a poor shooting angle, the contour edge features of the target are extracted through image processing, combined with the relative position features of the key points, normalized, and recognized with a neural network, which improves recognition accuracy.
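The pattern-library construction of steps (61)–(62) can be sketched as follows. The patent does not fix the normalization scheme, so per-column min-max scaling is assumed here; the function name is illustrative:

```python
import numpy as np

def build_pattern_library(contour_rows, limb_rows):
    """Step (62) sketch: concatenate target-contour features (area, perimeter,
    aspect ratio, eccentricity) with the limb relative-position features per
    sample, then min-max normalize each column to [0, 1] to form the
    training pattern library."""
    X = np.hstack([np.asarray(contour_rows, float), np.asarray(limb_rows, float)])
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # avoid divide-by-zero on constant columns
    return (X - lo) / span, (lo, span)     # keep (lo, span) to normalize new samples
```

The normalized vectors would then be fed to the neural network posture classifier of step (63).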
Further, the method also comprises the following steps: (7) inputting the detection result of the gesture recognition into an unconventional behavior judgment module; if the person is judged to be out of compliance, acquiring gait information of the target in the image by combining a target tracking algorithm, and verifying the identity of the person with out-of-compliance behavior in the image by adopting a gait recognition technology; the input of the non-compliance behavior judgment module is a gesture, the output is a result of whether compliance is achieved, and a user sets part of gestures as non-compliance through setting parameters. The identity information of the people who do not comply with the behaviors is verified through the gait recognition algorithm, and the problem that the identity is difficult to verify through face recognition due to the shooting angle or the shooting distance is solved.
Further, the method also comprises the following steps: (8) judging the position of the person who does not comply with the regulations by adopting a stereoscopic vision technical algorithm and an image processing technology; and (4) tracking the position information of the people with the non-compliant behaviors in real time by combining a target tracking algorithm, counting the duration of the same person and the same behavior, and uploading the result to a background management system at regular time.
Further, the cumulative weight of the confidence degrees of the key points calculated in step (5) is:

$W = \sum_{j=1}^{J} E_j Q_j$

where $E = (E_1, E_2, \ldots, E_J)$ is the confidence of the key points, $Q = (Q_1, Q_2, \ldots, Q_J)$ is the weight of the key points, and $J$ is the number of key points to be detected.
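A sketch of this cumulative-weight decision rule; the function names are illustrative:

```python
def cumulative_weight(E, Q):
    """Cumulative confidence weight W = sum_j E_j * Q_j over the J detected
    key points, where E_j is the detector confidence of key point j and
    Q_j its weight under the posture being tested."""
    assert len(E) == len(Q)
    return sum(e * q for e, q in zip(E, Q))

def enough_keypoint_info(E, Q, threshold):
    """Step (5) decision: the limb relative-position method is used only
    when the cumulative weight exceeds the preset threshold."""
    return cumulative_weight(E, Q) > threshold
```

Per the text, each posture would carry its own weight vector `Q` and its own `threshold`.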
Further, the predetermined value in step (3) is 4.
Further, the human skeleton key point detection in step (2) adopts a bottom-up key point detection algorithm, specifically comprising the following steps:
(2.1) construction of a double-branch convolutional neural network
The preprocessed pictures are input into a double-branch deep convolutional neural network (VGG-19) containing 16 convolutional layers and 3 fully connected layers. The first 10 layers of the network create a feature map of the input image, yielding the feature $F$; $F$ is fed into two branches, the first predicting the key-point confidence maps $S$ and the second predicting the key-point affinity fields $L$, where $S = (S_1, S_2, \ldots, S_J)$, $J$ being the number of key points to be detected, and $L = (L_1, L_2, \ldots, L_C)$, $C$ being the number of joint pairs to be detected. The inputs to each stage of the network are:

$S^t = \rho^t(F, S^{t-1}, L^{t-1}), \quad L^t = \phi^t(F, S^{t-1}, L^{t-1})$

where $S^t, L^t$ denote the results of the $t$-th round of training, $\rho^t$ the $t$-th round of the confidence training process, and $\phi^t$ the $t$-th round of the affinity training process;
(2.2) Key Point confidence map prediction
$S^*_{j,k}(p) = \exp\left(-\frac{\lVert p - x_{j,k} \rVert_2^2}{\sigma^2}\right)$

where $S^*_{j,k}(p)$ is the confidence that the $j$-th key point of the $k$-th person exists at pixel $p$; $x_{j,k}$ is the $j$-th key point of the $k$-th person; and $\sigma$ controls the degree of diffusion of the Gaussian distribution. A confidence threshold is set for the key points, and a key point is retained if its confidence exceeds the threshold. The confidence of the whole human body is the maximum of the confidences of all its components:

$S^*_j(p) = \max_k S^*_{j,k}(p)$
(2.3) Key Point affinity field prediction
The key-point affinity fields are then:

$L^*_{c,k}(p) = \begin{cases} v, & \text{if } p \text{ lies on limb } c \text{ of person } k \\ 0, & \text{otherwise} \end{cases}$

where $L^*_{c,k}(p)$ indicates whether a pixel $p$ lies on the $c$-th pairwise-connected joint pair (limb) of the $k$-th person; $v = (x_{j_2,k} - x_{j_1,k}) / \lVert x_{j_2,k} - x_{j_1,k} \rVert_2$ is the unit vector pointing from position $j_1$ to position $j_2$; and $x_{j_1,k}, x_{j_2,k}$ are the true coordinates of $j_1, j_2$. A pixel $p$ is judged to lie on the limb formed by the joint pair connection if:

$0 \le v \cdot (p - x_{j_1,k}) \le l_{c,k} \quad \text{and} \quad \lvert v_\perp \cdot (p - x_{j_1,k}) \rvert \le \sigma_l$

where $l_{c,k}$ is the length of the limb formed by the $c$-th pairwise-connected joint pair of the $k$-th person, and $\sigma_l$ is the width of the limb;
(2.4) Key Point clustering
Bipartite-graph matching with the maximum-edge-weight Hungarian algorithm yields the optimal multi-person key-point connection result, assigning each key point to a different person. The objective of the Hungarian algorithm is to find, among the $C$ pairwise-connected joint sets, the combination with the maximum sum of edge weights:

$\max_{Z_c} E_c = \max_{Z_c} \sum_{m \in D_{j_1}} \sum_{n \in D_{j_2}} E_{mn} \, z_{j_1 j_2}^{mn}$

where $E_{mn}$ is the edge weight between the $m$-th and $n$-th key-point candidates, $D_J$ is the set of candidates of key-point type $J$, and $z_{j_1 j_2}^{mn} \in \{0, 1\}$ indicates whether the two key points are connected. For any two key-point positions $d_{j_1}$ and $d_{j_2}$, the correlation of the key-point pair is characterized by the integral of the affinity field:

$E = \int_0^1 L_c(p(u)) \cdot \frac{d_{j_2} - d_{j_1}}{\lVert d_{j_2} - d_{j_1} \rVert_2} \, du, \qquad p(u) = (1-u)\, d_{j_1} + u\, d_{j_2}$

where $d_{j_1}, d_{j_2}$ are the key points $j_1, j_2$; $c$ is the limb connecting $j_1$ and $j_2$; $p(u)$ is a sample point between the key points; and $L_c(p(u))$ is the predicted PAF value of limb $c$ at point $p(u)$.
In addition, the invention also provides a human body posture recognition system based on the key point detection technology, which comprises the following components:
the image and processing module is used for acquiring continuous frame images through video data and preprocessing the continuous frame images;
the key point detection module is used for detecting the key points of the skeleton of the human body of the acquired image, acquiring the coordinates of the two-dimensional image of each key point and associating the key points with the human body; the gesture recognition device is used for setting a preset value, judging whether the number of the detected human body key points exceeds the preset value or not, and if not, not carrying out gesture recognition; if the two-dimensional image coordinate exceeds the preset value, converting the two-dimensional image coordinate of each key point into a three-dimensional coordinate;
the gesture recognition module is used for acquiring relative position characteristics of the limb mapped by the key points based on the three-dimensional coordinates of the key points, wherein the relative position characteristics comprise the limb distance represented between the key points, the limb included angle and the distance difference of the key points in each direction of the x axis, the y axis and the z axis; calculating the confidence coefficient accumulated weight of the key points, comparing the accumulated weight with a preset threshold value, and judging whether the detected human body key point information representing the posture is enough; if the accumulated weight is larger than a preset threshold value, indicating that the human body key point information representing the gesture is enough, performing gesture recognition by using a relative position relation judgment method of limbs, and outputting a detection result; if the accumulated weight is smaller than a preset threshold value, indicating that the human key point information representing the gesture is insufficient, adopting a neural network gesture recognition technology to perform gesture recognition, and outputting a detection result; the method for determining the relative positional relationship of the limbs includes: acquiring relative position characteristics of limbs under various preset human body postures, and taking the range of the relative position characteristics of the limbs acquired at different postures as a set threshold; comparing the acquired relative position characteristics of the limbs with a set threshold value, and if the acquired relative position characteristics of the limbs are within the range of the set threshold value, corresponding to a preset posture; otherwise, the gesture is not performed;
the identity verification module is used for inputting the detection result of the gesture recognition into the non-compliance behavior judgment module; if the person is judged to be out of compliance, acquiring gait information of the target in the image by combining a target tracking algorithm, and verifying the identity of the person with out-of-compliance behavior in the image by adopting a gait recognition technology; the input of the non-compliance behavior judgment module is a gesture, the output is a result of whether compliance is achieved, and a user sets part of gestures as non-compliance through setting parameters;
the position tracking module is used for judging the positions of the persons who do not comply with the regulations by adopting a stereoscopic vision technical algorithm and an image processing technology; and (4) tracking the position information of the people with the non-compliant behaviors in real time by combining a target tracking algorithm, counting the duration of the same person and the same behavior, and uploading the result to a background management system at regular time.
The invention also provides a computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the above-mentioned method steps.
The invention also provides human body posture recognition debugging equipment comprising a memory, a processor, and a program stored on and runnable on the memory, the program implementing the steps of the above method when executed by the processor.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a distribution diagram of key points of a human body in the method of the present invention;
FIG. 3 is a flow chart of determining whether the content of the key point information is sufficient in the method of the present invention;
FIG. 4 is a flow chart of gesture recognition based on the relative position relationship of the limbs in the method of the present invention;
FIG. 5 is a schematic front view of the squatting position;
FIG. 6 is a schematic side view of the squat position;
FIG. 7 is a flow chart of a neural network gesture recognition algorithm in the method of the present invention;
FIG. 8 is a flow chart of a person identity verification algorithm performed by gait recognition technology in the method of the present invention;
FIG. 9 is a three-dimensional visual positioning model;
FIG. 10 is a flow chart of the target three-dimensional coordinate calculation;
fig. 11 is a flowchart of the same behavior detection of the same person.
Detailed Description
The technical scheme of the invention is further explained by combining the attached drawings.
As shown in fig. 1, the method for recognizing human body posture based on the key point detection technology includes the following steps:
step one, logging in a stereoscopic vision camera through software, acquiring a video stream in a channel, extracting frames, decoding, and performing format conversion to finally obtain continuous frame images in an RGB format; and image enhancement and denoising are carried out on the continuous frame images, so that the image quality is improved. Considering that in an actual application scene, an image acquired by a camera contains more non-target objects, in order to improve the detection efficiency and accuracy, the image is cut according to a monitoring range and is zoomed to a specified size;
Step two, detecting the key points of the human skeleton in the obtained image with a bottom-up human key point detection algorithm, obtaining the two-dimensional image coordinates of each key point, and associating the key points with the human body; the human skeleton key point detection algorithm comprises the following steps:
(1) constructing a double-branch convolutional neural network
The preprocessed picture is input into a double-branch deep convolutional neural network (VGG-19) containing 16 convolutional layers and 3 fully connected layers; the convolution kernel size is 3×3, the activation function is ReLU, and a Dropout mechanism is used in training. The first 10 layers of the network create a feature map of the input image, yielding the feature $F$. $F$ is fed into two branches: the first predicts the key-point confidence maps $S$ and the second the key-point affinity fields $L$, where $S = (S_1, S_2, \ldots, S_J)$, $J$ being the number of key points to be detected, and $L = (L_1, L_2, \ldots, L_C)$, $C$ being the number of joint pairs to be detected. The inputs to each stage of the network are:

$S^t = \rho^t(F, S^{t-1}, L^{t-1}), \quad L^t = \phi^t(F, S^{t-1}, L^{t-1})$

where $S^t, L^t$ denote the results of the $t$-th round of training, $\rho^t$ the $t$-th round of the confidence training process, and $\phi^t$ the $t$-th round of the affinity training process.
(2) Keypoint confidence map prediction
The key-point confidence map $S^*_{j,k}(p)$ gives the confidence that the $j$-th key point of the $k$-th person exists at pixel $p$, computed as:

$S^*_{j,k}(p) = \exp\left(-\frac{\lVert p - x_{j,k} \rVert_2^2}{\sigma^2}\right)$

where $x_{j,k}$ is the $j$-th key point of the $k$-th person and $\sigma$ controls the degree of diffusion of the Gaussian distribution. The key-point confidence threshold is set to 0.5, and a key point is retained if its confidence exceeds the threshold. The confidence of the whole human body is the maximum of the confidences of all its components:

$S^*_j(p) = \max_k S^*_{j,k}(p)$
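The per-person Gaussian confidence map and the per-pixel maximum over all persons can be sketched as follows; the grid size and `sigma` used here are arbitrary:

```python
import numpy as np

def confidence_map(keypoints, height, width, sigma=1.0):
    """Ground-truth confidence map for one key-point type j:
    S*_{j,k}(p) = exp(-||p - x_{j,k}||^2 / sigma^2) for each person k (one
    Gaussian peak per person, given as (x, y)), with the whole-body map
    taken as the per-pixel maximum over persons."""
    ys, xs = np.mgrid[0:height, 0:width]
    maps = [np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / sigma ** 2)
            for x, y in keypoints]
    return np.max(maps, axis=0)
```

A peak of exactly 1.0 sits at each annotated key point, decaying with the Gaussian spread `sigma`.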
(3) keypoint affinity field prediction
The key-point affinity field $L^*_{c,k}(p)$ indicates whether a pixel $p$ lies on the $c$-th pairwise-connected joint pair (limb) of the $k$-th person:

$L^*_{c,k}(p) = \begin{cases} v, & \text{if } p \text{ lies on limb } c \text{ of person } k \\ 0, & \text{otherwise} \end{cases}$

where $v = (x_{j_2,k} - x_{j_1,k}) / \lVert x_{j_2,k} - x_{j_1,k} \rVert_2$ is the unit vector pointing from position $j_1$ to position $j_2$, and $x_{j_1,k}, x_{j_2,k}$ are the true coordinates of $j_1, j_2$. A pixel $p$ is judged to lie on the limb formed by the joint pair connection if:

$0 \le v \cdot (p - x_{j_1,k}) \le l_{c,k} \quad \text{and} \quad \lvert v_\perp \cdot (p - x_{j_1,k}) \rvert \le \sigma_l$

where $l_{c,k}$ is the length of the limb formed by the $c$-th pairwise-connected joint pair of the $k$-th person, and $\sigma_l$ is the width of the limb.
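The limb-membership test above can be sketched for the 2-D image case as:

```python
import numpy as np

def on_limb(p, x_j1, x_j2, sigma_l):
    """True if pixel p lies on the limb from joint x_j1 to x_j2, i.e.
    0 <= v . (p - x_j1) <= l_c  and  |v_perp . (p - x_j1)| <= sigma_l,
    where v is the unit vector along the limb, l_c the limb length, and
    sigma_l the limb width."""
    p, a, b = (np.asarray(q, float) for q in (p, x_j1, x_j2))
    limb = b - a
    length = np.linalg.norm(limb)        # l_c
    v = limb / length                    # unit vector along the limb
    v_perp = np.array([-v[1], v[0]])     # unit vector perpendicular to it
    d = p - a
    return bool(0.0 <= v @ d <= length and abs(v_perp @ d) <= sigma_l)
```

For a horizontal limb from (0, 0) to (10, 0) with width 1, points on the segment pass and points more than one pixel off-axis or past the joints fail.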
(4) Key point clustering
Bipartite-graph matching with the maximum-edge-weight Hungarian algorithm yields the optimal multi-person key-point connection result, assigning each key point to a different person.

The objective of the Hungarian algorithm is to find, among the $C$ pairwise-connected joint sets, the combination with the maximum sum of edge weights:

$\max_{Z_c} E_c = \max_{Z_c} \sum_{m \in D_{j_1}} \sum_{n \in D_{j_2}} E_{mn} \, z_{j_1 j_2}^{mn}$

where $E_{mn}$ is the edge weight between the $m$-th and $n$-th key-point candidates, $D_J$ is the set of candidates of key-point type $J$, and $z_{j_1 j_2}^{mn} \in \{0, 1\}$ indicates whether the two key points are connected.

For any two key-point positions $d_{j_1}$ and $d_{j_2}$, the correlation of the key-point pair is characterized by the integral of the affinity field:

$E = \int_0^1 L_c(p(u)) \cdot \frac{d_{j_2} - d_{j_1}}{\lVert d_{j_2} - d_{j_1} \rVert_2} \, du, \qquad p(u) = (1-u)\, d_{j_1} + u\, d_{j_2}$

where $d_{j_1}, d_{j_2}$ are the key points $j_1, j_2$; $c$ is the limb connecting $j_1$ and $j_2$; $p(u)$ is a sample point between the key points; and $L_c(p(u))$ is the predicted PAF value of limb $c$ at point $p(u)$.
To improve computational efficiency, the correlation between each pair of key points is approximated by sampling 10 equally spaced pixel points along the connecting segment and summing the integrand. Through the above steps, as shown in fig. 2, the two-dimensional coordinates of 18 human body key points of a single person or multiple persons in the image can be obtained: nose, neck, left shoulder, left elbow, left wrist, right shoulder, right elbow, right wrist, left crotch, left knee, left ankle, right crotch, right knee, right ankle, left eye, right eye, left ear, and right ear.
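The 10-sample approximation of the association integral can be sketched as follows; here `paf` stands for an assumed callable returning the predicted 2-D PAF vector at a point, since the patent does not specify how the field is stored:

```python
import numpy as np

def paf_score(d_j1, d_j2, paf, n_samples=10):
    """Approximate E = ∫ L_c(p(u)) · v du between two candidate key points
    by sampling n equally spaced points p(u) = (1-u) d_j1 + u d_j2, where
    v is the unit vector from d_j1 to d_j2 and `paf` maps a 2-D point to
    the predicted affinity vector there."""
    d1, d2 = np.asarray(d_j1, float), np.asarray(d_j2, float)
    v = (d2 - d1) / np.linalg.norm(d2 - d1)   # unit vector j1 -> j2
    score = 0.0
    for u in np.linspace(0.0, 1.0, n_samples):
        score += float(np.dot(paf((1 - u) * d1 + u * d2), v))
    return score / n_samples
```

A field perfectly aligned with the limb direction scores 1.0; an orthogonal field scores 0.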
Setting the preset value to be 4, judging whether the number of the detected key points of the human body exceeds 4, and if the number of the detected key points does not exceed 4, not performing gesture recognition; if the number of the detected key points exceeds 4, converting the two-dimensional image coordinates of the acquired key points into three-dimensional coordinates;
acquiring relative position characteristic values of the limbs mapped by the key points based on the three-dimensional coordinates of the key points, wherein the relative position characteristic values comprise characteristic limb distances among the key points, limb included angles and distance differences of the key points in the directions of x, y and z;
for example, the left crotch coordinate is (x)1,y1,z1) The left knee coordinate is (x)2,y2,z2) The left ankle coordinate is (x)3,y3,z3) The left thigh limb distance is | (x)1,y1,z1)-(x2,y2,z2)‖2The distance between the left leg and the limb is | (x)3,y3,z3)-(x2,y2,z2)‖2The angle between the left thigh and the left shank isThe distance difference from the left hip joint to the left knee joint in each direction of x, y and z is x1-x2,y1-y2,z1-z2. And in order to enhance the robustness and the practicability of the system, carrying out data normalization processing on the calculated distance and included angle.
And step five, calculating the confidence coefficient accumulation weight of the key points, and using the confidence coefficient accumulation weight as an index for judging whether the key point information is enough or not. According to the confidence coefficient of each joint point and by combining the weight values of each key point set by each preset posture, as shown in table 1, the formula for calculating the accumulated weight is as follows:
$W = \sum_{j=1}^{J} E_j Q_j$

where $E = (E_1, E_2, \ldots, E_J)$ is the confidence of the key points, $Q = (Q_1, Q_2, \ldots, Q_J)$ is the key-point weight, and $J$ is the number of key points to be detected.
Table 1. Weights of the key points for each preset posture
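The accumulated-weight computation can be sketched as follows (a minimal illustration, assuming the per-posture weights of table 1 are supplied as a list alongside the detected confidences):

```python
def accumulated_weight(E, Q):
    """Confidence accumulated weight W = sum_j E_j * Q_j over J key points."""
    if len(E) != len(Q):
        raise ValueError("confidences and weights must have equal length")
    return sum(e * q for e, q in zip(E, Q))
```

With confidences [0.5, 1.0] and weights [2.0, 3.0] the accumulated weight is 4.0; comparing this against the per-posture threshold decides which recognizer is used.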
As shown in fig. 3, the calculated accumulated weight is compared with a preset threshold to judge whether the detected human body key point information representing the posture is sufficient. If the accumulated weight is larger than the preset threshold, the key point information is sufficient, posture recognition is performed by the relative position relation judgment method of the limbs, and the detection result is output; if the accumulated weight is smaller than the preset threshold, the key point information is insufficient, the neural network posture recognition technique is used instead, and the detection result is output. Different postures have different preset thresholds.
As shown in fig. 4, the method for determining the relative positional relationship of the limbs includes the steps of:
(a) acquiring relative position characteristics of limbs under various preset human body postures, and taking the range of relative position characteristic values of the limbs acquired at different postures as a set threshold;
(b) comparing the relative position characteristic value of the limb obtained in the step (4) with a set threshold value, and if the relative position characteristic value is within the set threshold value range, corresponding to a preset posture; otherwise not the gesture.
As shown in figs. 5 to 6, for detection and identification of the squat action, the included angle α between the left calf and the left thigh, the included angle δ between the right calf and the right thigh, the included angle β between the torso and the left thigh, and the included angle γ between the torso and the right thigh are calculated from the coordinate information of key points such as the torso, left knee, left ankle, left hip, right knee, right ankle and right hip; if α < 100°, δ < 100°, β < 90° and γ < 90° hold simultaneously, the squat action is determined.
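The threshold-range judgment of steps (a)–(b), with the squat angles as an example, might look like the following (a sketch; the posture names and numeric ranges are illustrative, not taken from the specification):

```python
def match_posture(features, posture_ranges):
    """Return the first preset posture whose feature ranges all contain the
    measured relative-position feature values, or None if no posture fits."""
    for name, ranges in posture_ranges.items():
        if all(lo <= features[key] <= hi for key, (lo, hi) in ranges.items()):
            return name
    return None

# illustrative squat rule: all four included angles within their ranges
SQUAT = {"squat": {"alpha": (0, 100), "delta": (0, 100),
                   "beta": (0, 90), "gamma": (0, 90)}}
```

For angles α=80°, δ=85°, β=70°, γ=75° this matches "squat"; if any angle leaves its range, no posture is matched.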
As shown in fig. 7, the neural network gesture recognition technology specifically includes the following steps:
(a) extracting features such as area, perimeter, aspect ratio, eccentricity and the like of a human target contour by adopting image processing technologies such as Gaussian modeling, foreground detection, connected domain analysis, morphological processing and the like;
(b) combining and normalizing the relative position features of the target contour features and the key points to form a training mode library as feature vectors;
(c) through training of the neural network model, a neural network posture classifier is constructed to recognize the posture.
This posture recognition likewise extracts features such as the area, perimeter, aspect ratio and eccentricity of the human body target contour, combines and normalizes the target contour features with the relative position features of the key points, recognizes the posture with the neural network posture classifier, and outputs the detection result.
the neural network train well in advance, seven layers of network structures based on keras is built, wherein the first three-layer is relu active layer, the middle three-layer is the BatchNormalization layer, the last layer is softmax output layer. softmax maps the outputs of a plurality of neurons into the (0, 1) interval, with the expression:
wherein V is an array of various action result values output by the network, and VgIs the value of behavior g, g ∈ {0,1, …, K-1}, K is the number of behaviors, SgThe probability value of the g-th behavior is shown, and e is the base of the natural logarithm.
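The softmax mapping can be sketched numerically as follows (a standard numerically stable formulation, equivalent to the formula above because subtracting the maximum cancels in the ratio):

```python
import numpy as np

def softmax(V):
    """S_g = exp(V_g) / sum_k exp(V_k); the max is subtracted first for
    numerical stability without changing the result."""
    V = np.asarray(V, float)
    e = np.exp(V - V.max())
    return e / e.sum()
```

Equal raw scores map to equal probabilities, and the outputs always sum to 1, so the largest entry can be read off as the recognized behavior.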
The posture recognition detection result is input into the non-compliance behavior judgment module. If a behavior is judged non-compliant, gait information of the target in the image is acquired in combination with a target tracking algorithm, and gait recognition is used to verify the identity of the person with the non-compliant behavior. The non-compliance behavior judgment module comprises a function whose input is the posture and whose output is whether it is compliant; by setting parameters in the program, the user can designate which postures count as non-compliant for different scenarios.
As shown in fig. 8, the gait recognition technique has the following steps:
(a) acquiring a gait profile diagram of a human body walking at different walking speeds;
(b) acquiring a lower limb angle change track of a gait cycle, contour width, perimeter, area and the like as gait characteristics, constructing a support vector machine gait recognition model base, training a model, and finally obtaining a vector machine gait classifier;
(c) the trained support vector machine classifier is used to verify the identity of the persons with non-compliant behavior in the acquired images.
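A gait classifier of the kind described might be trained with a support vector machine as follows (a sketch using scikit-learn's `SVC` on synthetic feature vectors; the feature layout, cluster statistics and enrolled identities are hypothetical):

```python
# Hypothetical gait features per cycle: [mean lower-limb angle, contour
# width, perimeter, area]; labels are enrolled person identities.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (20, 4)),    # person 0's gait cycles
               rng.normal(3.0, 1.0, (20, 4))])   # person 1's gait cycles
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf").fit(X, y)                # gait identity classifier
pred = clf.predict(np.array([[3.1, 2.9, 3.0, 3.2]]))
```

A query feature vector near person 1's cluster is assigned identity 1; in practice the features would come from the contour and lower-limb-angle trajectory of a full gait cycle.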
Step seven: the positions of the persons with non-compliant behavior are determined using a stereoscopic vision algorithm and image processing technology; combined with the target tracking algorithm, the position information of persons with non-compliant behavior is tracked in real time, the duration of the same behavior by the same person is counted, and the results are uploaded to the background management system at regular intervals.
As shown in figs. 9 to 10, image processing techniques such as Gaussian modeling, foreground detection, connected-domain analysis, morphological processing and template matching are used to extract the centroid pixel coordinates of the human target connected domains in the images shot by the left and right cameras. Combined with the physical length corresponding to a unit pixel, a coordinate system is established with the image center as origin, and the pixel coordinates of the key points are converted into image coordinates. Then, according to the parallax principle and the triangulation principle, a coordinate system is established with the midpoint of the line connecting the optical centers of the left and right cameras as origin, and the coordinates of the key points in three-dimensional space are calculated, thereby obtaining the position information.
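The parallax/triangulation step can be sketched as follows (a minimal rectified-stereo model; it assumes image coordinates are already centred on the optical axes and expressed in metric units, and the function name is illustrative):

```python
def triangulate(xl, xr, y, f, baseline):
    """Rectified-stereo triangulation: xl, xr are the key point's image
    x-coordinates in the left/right views (origin at image centre, metric
    units), f the focal length, baseline the distance between the optical
    centres; returns (X, Y, Z) about the midpoint of the optical centres."""
    d = xl - xr                          # disparity
    Z = f * baseline / d                 # depth from similar triangles
    X = Z * (xl + xr) / (2.0 * f)
    Y = Z * y / f
    return X, Y, Z
```

For f = 1, baseline = 0.1 and symmetric image points xl = 0.05, xr = −0.05, the point lies on the axis midway between the cameras at depth 1.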
As shown in fig. 11, if the posture and position of a target are found in the N-th frame image, the posture and position of the target in the (N+1)-th frame are determined through the target tracking algorithm, and any posture change is judged in real time. If the posture changes, the picture is stored and uploaded to the background management system, realizing real-time tracking of the same person's posture; if neither the posture nor the position changes, the time for which the posture has been held is compared with a set threshold, and if it exceeds the threshold, the result is uploaded to the background management system, realizing real-time statistics of the same person and the same behavior.
In addition, the invention also provides a human body posture recognition system based on the key point detection technology, which comprises the following components:
the image and processing module is used for acquiring continuous frame images through video data and preprocessing the continuous frame images;
the key point detection module is used for detecting the key points of the skeleton of the human body of the acquired image, acquiring the coordinates of the two-dimensional image of each key point and associating the key points with the human body; the gesture recognition device is used for setting a preset value, judging whether the number of the detected human body key points exceeds the preset value or not, and if not, not carrying out gesture recognition; if the two-dimensional image coordinate exceeds the preset value, converting the two-dimensional image coordinate of each key point into a three-dimensional coordinate;
the gesture recognition module is used for acquiring relative position characteristics of the limb mapped by the key points based on the three-dimensional coordinates of the key points, wherein the relative position characteristics comprise the limb distance represented between the key points, the limb included angle and the distance difference of the key points in each direction of the x axis, the y axis and the z axis; calculating the confidence coefficient accumulated weight of the key points, comparing the accumulated weight with a preset threshold value, and judging whether the detected human body key point information representing the posture is enough; if the accumulated weight is larger than a preset threshold value, indicating that the human body key point information representing the gesture is enough, performing gesture recognition by using a relative position relation judgment method of limbs, and outputting a detection result; if the accumulated weight is smaller than a preset threshold value, indicating that the human key point information representing the gesture is insufficient, adopting a neural network gesture recognition technology to perform gesture recognition, and outputting a detection result; the method for determining the relative positional relationship of the limbs includes: acquiring relative position characteristics of limbs under various preset human body postures, and taking the range of relative position characteristic values of the limbs acquired at different postures as a set threshold; comparing the acquired relative position characteristic value of the limb with a set threshold value, and if the acquired relative position characteristic value of the limb is within the range of the set threshold value, corresponding to a preset posture; otherwise, the gesture is not performed;
the identity verification module is used for inputting the detection result of the gesture recognition into the non-compliance behavior judgment module; if the person is judged to be out of compliance, acquiring gait information of the target in the image by combining a target tracking algorithm, and verifying the identity of the person with out-of-compliance behavior in the image by adopting a gait recognition technology; the input of the non-compliance behavior judgment module is a gesture, the output is a result of whether compliance is achieved, and a user sets part of gestures as non-compliance through setting parameters;
the position tracking module is used for judging the positions of the persons who do not comply with the regulations by adopting a stereoscopic vision technical algorithm and an image processing technology; and (4) tracking the position information of the people with the non-compliant behaviors in real time by combining a target tracking algorithm, counting the duration of the same person and the same behavior, and uploading the result to a background management system at regular time.
The invention also provides a computer-readable storage medium having stored thereon a computer program, which, when being executed by a processor, carries out the steps of the method according to the invention.
The invention also provides a human body posture recognition debugging device comprising a memory, a processor and a program stored on the memory and executable thereon, the program implementing the steps of the method when executed by the processor.
Claims (10)
1. A human body posture identification method based on a key point detection technology is characterized by comprising the following steps:
(1) acquiring continuous frame images through video data and preprocessing the continuous frame images;
(2) detecting key points of the human skeleton of the obtained image, obtaining two-dimensional image coordinates of each key point and associating the key points with the human body;
(3) setting a preset value, judging whether the number of the detected human body key points exceeds the preset value or not, and if not, not performing gesture recognition; if the two-dimensional image coordinate exceeds the preset value, converting the two-dimensional image coordinate of each key point into a three-dimensional coordinate;
(4) acquiring relative position characteristics of the limb mapped by the key points based on the three-dimensional coordinates of the key points, wherein the relative position characteristics comprise the limb distance represented between the key points, the limb included angle and the distance difference of the key points in the directions of x, y and z;
(5) calculating the confidence coefficient accumulated weight of the key points, comparing the accumulated weight with a preset threshold value, and judging whether the detected human body key point information representing the posture is enough; if the accumulated weight is larger than a preset threshold value, indicating that the human body key point information representing the gesture is enough, performing gesture recognition by using a relative position relation judgment method of limbs, and outputting a detection result; the method for determining the relative position relationship of the limbs comprises the following steps:
(51) acquiring relative position characteristics of limbs under various preset human body postures, and taking the range of the relative position characteristics of the limbs acquired at different postures as a set threshold;
(52) comparing the relative position characteristics of the limbs acquired in the step (4) with a set threshold value, and if the relative position characteristics of the limbs are within the range of the set threshold value, corresponding to a preset posture; otherwise not the gesture.
2. The human body posture identification method based on the key point detection technology as claimed in claim 1, characterized by further comprising the step (6) of performing posture identification by adopting a neural network posture identification technology if the accumulated weight is less than a preset threshold value, which indicates that the human body key point information representing the posture is insufficient, and outputting a detection result; the neural network posture recognition technology in the step (6) specifically comprises the following steps:
(61) extracting the characteristics of the area, the perimeter, the aspect ratio and the eccentricity of the human body target contour by adopting an image processing technology;
(62) combining and normalizing the target contour features and the relative position features of the limbs mapped by the key points to form a feature vector to form a training mode library;
(63) and (3) constructing a neural network posture classifier to recognize the posture through training of the neural network model.
3. The method for recognizing the human body posture based on the key point detection technology according to claim 2, characterized by further comprising the steps of:
(7) inputting the detection result of the gesture recognition into an unconventional behavior judgment module; and if the person is judged to be not in compliance, acquiring gait information of the target in the image by combining a target tracking algorithm, and verifying the identity of the person with the non-compliant behavior in the image by adopting a gait recognition technology.
4. The method for recognizing the human body posture based on the key point detection technology as claimed in claim 3, further comprising the steps of:
(8) judging the position of the person who does not comply with the regulations by adopting a stereoscopic vision technical algorithm and an image processing technology; and (4) tracking the position information of the people with the non-compliant behaviors in real time by combining a target tracking algorithm, counting the duration of the same person and the same behavior, and uploading the result to a background management system at regular time.
5. The method for recognizing human body posture based on the key point detection technology as claimed in claim 1, wherein the key point confidence accumulated weight in step (5) is calculated as

$$W=\sum_{j=1}^{J}E_{j}Q_{j}$$

where $E=(E_1,E_2,\ldots,E_J)$ is the confidence of the key points, $Q=(Q_1,Q_2,\ldots,Q_J)$ is the weight of the key points, and $J$ represents the number of key points to be detected.
6. The method for recognizing the human body posture based on the key point detection technology as claimed in claim 1, wherein the predetermined value in the step (3) is 4.
7. The method for recognizing human body posture based on the key point detection technology as claimed in claim 1, wherein the human body skeleton key point detection of step (2) is a bottom-up key point detection algorithm; the method specifically comprises the following steps:
(2.1) construction of a double-branch convolutional neural network
The preprocessed picture is input into a double-branch deep convolutional neural network based on VGG-19 comprising 16 convolution layers and 3 fully connected layers, wherein the first 10 layers of the network create a feature map for the input image to obtain a feature F. F is then fed into two branches, the first predicting the key point confidence maps S and the second predicting the key point affinity fields L, where $S=(S_1,S_2,\ldots,S_J)$, with J the number of key points to be detected, and $L=(L_1,L_2,\ldots,L_C)$, with C the number of joint pairs to be detected. The inputs of each stage of the network are

$$S^{t}=\rho^{t}\!\left(F,S^{t-1},L^{t-1}\right),\qquad L^{t}=\phi^{t}\!\left(F,S^{t-1},L^{t-1}\right),\quad t\ge 2$$

where $S^{t},L^{t}$ denote the results of the t-th training round, $\rho^{t}$ represents the t-th round confidence prediction process, and $\phi^{t}$ represents the t-th round affinity prediction process.
(2.2) Key Point confidence map prediction
The ideal confidence map for key point j of person k is

$$S_{j,k}^{*}(p)=\exp\!\left(-\frac{\left\|p-x_{j,k}\right\|_{2}^{2}}{\sigma^{2}}\right)$$

where $S_{j,k}^{*}(p)$ is the confidence that the j-th key point of the k-th person exists at pixel p, $x_{j,k}$ is the position of the j-th key point of the k-th person, and σ is used to control the degree of diffusion of the Gaussian distribution. A key point confidence threshold is set, and a key point is retained if its confidence exceeds the threshold. The confidence of the whole human body map is the maximum of the confidences over all persons:

$$S_{j}^{*}(p)=\max_{k}S_{j,k}^{*}(p)$$
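The ideal confidence map can be generated, for example, as follows (a sketch; the array is indexed `[y, x]` and the key point is given as (x, y), both assumptions for illustration):

```python
import numpy as np

def confidence_map(shape, keypoint, sigma):
    """Ideal confidence map S*(p) = exp(-||p - x||^2 / sigma^2) over an
    (h, w) pixel grid; `keypoint` is the annotated location (x, y)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - keypoint[0]) ** 2 + (ys - keypoint[1]) ** 2
    return np.exp(-d2 / sigma ** 2)
```

The map peaks at exactly 1 at the annotated pixel and decays with the Gaussian spread σ; multi-person maps take the element-wise maximum of the per-person maps.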
(2.3) Key Point affinity field prediction
The key point affinity field is then

$$L_{c,k}^{*}(p)=\begin{cases}v,&\text{if }p\text{ lies on limb }c\text{ of person }k\\[2pt]0,&\text{otherwise}\end{cases}$$

where $L_{c,k}^{*}(p)$ indicates whether a pixel point p lies on the c-th pairwise-connected joint (limb) of the k-th person, and

$$v=\frac{x_{j_{2},k}-x_{j_{1},k}}{\left\|x_{j_{2},k}-x_{j_{1},k}\right\|_{2}}$$

is the unit vector pointing from position $j_1$ to position $j_2$, with $x_{j_{1},k}$ and $x_{j_{2},k}$ the true coordinates of $j_1$ and $j_2$. A pixel point p is judged to lie on the limb formed by the joint pair connection if

$$0\le v\cdot\left(p-x_{j_{1},k}\right)\le l_{c,k}\quad\text{and}\quad\left|v_{\perp}\cdot\left(p-x_{j_{1},k}\right)\right|\le\sigma_{l}$$

where $l_{c,k}$ is the length of the limb formed by the c-th pairwise joint connection of the k-th person, and $\sigma_{l}$ is the width of the limb.
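The limb-membership test above can be sketched as follows (a minimal 2-D illustration; the function name is illustrative):

```python
import numpy as np

def on_limb(p, x1, x2, sigma_l):
    """True if pixel p lies inside the rectangle of half-width sigma_l
    around the limb segment x1 -> x2 (the affinity-field support region)."""
    p, x1, x2 = (np.asarray(a, float) for a in (p, x1, x2))
    seg = x2 - x1
    length = np.linalg.norm(seg)
    v = seg / length                      # unit vector along the limb
    v_perp = np.array([-v[1], v[0]])      # unit normal to the limb
    d = p - x1
    return bool(0.0 <= np.dot(v, d) <= length
                and abs(np.dot(v_perp, d)) <= sigma_l)
```

Pixels within the limb's length along v and within σ_l of its axis are inside the support region; all others get a zero field.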
(2.4) Key Point clustering
Performing bipartite graph matching by using the Hungarian algorithm with the maximum edge weight to obtain an optimal multi-person key point connection result, and enabling each key point to correspond to different persons;
The objective of the Hungarian algorithm is to find, among the C pairwise-connected joint sets, the matching with the maximum sum of edge weights:

$$\max_{z}\sum_{m\in D_{j_{1}}}\sum_{n\in D_{j_{2}}}E_{mn}\,z_{mn}$$

where $E_{mn}$ is the edge weight between the m-th and n-th key point candidates, $D_{J}$ is the set of key point candidates of type J, and $z_{mn}\in\{0,1\}$ is used to judge whether the two key points are connected. For any two key point positions $d_{j_{1}}$ and $d_{j_{2}}$, the correlation of the key point pair is characterized by calculating the integral of the affinity field:

$$E=\int_{0}^{1}L_{c}\big(p(u)\big)\cdot\frac{d_{j_{2}}-d_{j_{1}}}{\left\|d_{j_{2}}-d_{j_{1}}\right\|_{2}}\,du,\qquad p(u)=(1-u)\,d_{j_{1}}+u\,d_{j_{2}}$$
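The maximum-edge-weight bipartite matching can be sketched with SciPy's Hungarian-algorithm implementation (a toy example with two candidates per key point type; the score values are hypothetical):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# hypothetical affinity-integral scores between 2 shoulder candidates
# (rows) and 2 elbow candidates (columns)
E = np.array([[0.9, 0.1],
              [0.2, 0.8]])

rows, cols = linear_sum_assignment(E, maximize=True)  # Hungarian matching
pairs = list(zip(rows.tolist(), cols.tolist()))       # optimal connections
```

Here shoulder 0 connects to elbow 0 and shoulder 1 to elbow 1, the assignment with the largest total edge weight; repeating this per limb type and chaining shared key points groups the candidates into per-person skeletons.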
8. A human body posture recognition system based on a key point detection technology is characterized by comprising:
the image and processing module is used for acquiring continuous frame images through video data and preprocessing the continuous frame images;
the key point detection module is used for detecting the key points of the skeleton of the human body of the acquired image, acquiring the coordinates of the two-dimensional image of each key point and associating the key points with the human body; the gesture recognition device is used for setting a preset value, judging whether the number of the detected human body key points exceeds the preset value or not, and if not, not carrying out gesture recognition; if the two-dimensional image coordinate exceeds the preset value, converting the two-dimensional image coordinate of each key point into a three-dimensional coordinate;
the gesture recognition module is used for acquiring relative position characteristics of the limb mapped by the key points based on the three-dimensional coordinates of the key points, wherein the relative position characteristics comprise the limb distance represented between the key points, the limb included angle and the distance difference of the key points in the x, y and z directions; calculating the confidence coefficient accumulated weight of the key points, comparing the accumulated weight with a preset threshold value, and judging whether the detected human body key point information representing the posture is enough; if the accumulated weight is larger than a preset threshold value, indicating that the human body key point information representing the gesture is enough, performing gesture recognition by using a relative position relation judgment method of limbs, and outputting a detection result; if the accumulated weight is smaller than a preset threshold value, indicating that the human key point information representing the gesture is insufficient, adopting a neural network gesture recognition technology to perform gesture recognition, and outputting a detection result; the method for determining the relative positional relationship of the limbs includes: acquiring relative position characteristics of limbs under various preset human body postures, and taking the range of the relative position characteristics of the limbs acquired at different postures as a set threshold; comparing the acquired relative position characteristics of the limbs with a set threshold value, and if the acquired relative position characteristics of the limbs are within the range of the set threshold value, corresponding to a preset posture; otherwise, the gesture is not performed;
the identity verification module is used for inputting the detection result of the gesture recognition into the non-compliance behavior judgment module; if the person is judged to be out of compliance, acquiring gait information of the target in the image by combining a target tracking algorithm, and verifying the identity of the person with out-of-compliance behavior in the image by adopting a gait recognition technology;
the position tracking module is used for judging the positions of the persons who do not comply with the regulations by adopting a stereoscopic vision technical algorithm and an image processing technology; and (4) tracking the position information of the people with the non-compliant behaviors in real time by combining a target tracking algorithm, counting the duration of the same person and the same behavior, and uploading the result to a background management system at regular time.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method steps of any one of claims 1 to 7.
10. A human gesture recognition commissioning device characterized by a memory, a processor and a program stored and executable on said memory, said program when executed by the processor implementing the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111287237.9A CN114067358B (en) | 2021-11-02 | 2021-11-02 | Human body posture recognition method and system based on key point detection technology |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114067358A true CN114067358A (en) | 2022-02-18 |
CN114067358B CN114067358B (en) | 2024-08-13 |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114623400B (en) * | 2022-03-22 | 2023-10-20 | 广东卫明眼视光研究院 | Sitting posture identification desk lamp system and identification method based on remote intelligent monitoring |
CN114623400A (en) * | 2022-03-22 | 2022-06-14 | 广东卫明眼视光研究院 | Sitting posture identification desk lamp system based on remote intelligent monitoring and identification method |
CN115035435A (en) * | 2022-05-13 | 2022-09-09 | 湖南睿图智能科技有限公司 | Volleyball test evaluation method and device based on machine vision and electronic equipment |
CN114842391A (en) * | 2022-05-14 | 2022-08-02 | 云知声智能科技股份有限公司 | Motion posture identification method and system based on video |
CN114821806A (en) * | 2022-05-19 | 2022-07-29 | 国网智能科技股份有限公司 | Method and device for determining behavior of operator, electronic equipment and storage medium |
CN115105821A (en) * | 2022-07-04 | 2022-09-27 | 合肥工业大学 | Gymnastics training auxiliary system based on OpenPose |
CN115482580A (en) * | 2022-07-28 | 2022-12-16 | 广州大学 | Multi-person evaluation system based on machine vision skeletal tracking technology |
CN115331153B (en) * | 2022-10-12 | 2022-12-23 | 山东省第二人民医院(山东省耳鼻喉医院、山东省耳鼻喉研究所) | Posture monitoring method for assisting vestibule rehabilitation training |
CN115331153A (en) * | 2022-10-12 | 2022-11-11 | 山东省第二人民医院(山东省耳鼻喉医院、山东省耳鼻喉研究所) | Posture monitoring method for assisting vestibule rehabilitation training |
CN115497596A (en) * | 2022-11-18 | 2022-12-20 | 深圳聚邦云天科技有限公司 | Human body motion process posture correction method and system based on Internet of things |
CN115540875A (en) * | 2022-11-24 | 2022-12-30 | 成都运达科技股份有限公司 | Method and system for high-precision detection and positioning of train vehicles in station track |
CN115540875B (en) * | 2022-11-24 | 2023-03-07 | 成都运达科技股份有限公司 | Method and system for high-precision detection and positioning of train vehicles in station track |
CN116129524A (en) * | 2023-01-04 | 2023-05-16 | 长沙观谱红外科技有限公司 | Automatic gesture recognition system and method based on infrared image |
CN116206369A (en) * | 2023-04-26 | 2023-06-02 | 北京科技大学 | WMSD risk real-time monitoring method and device based on data fusion and machine vision |
CN116206369B (en) * | 2023-04-26 | 2023-06-27 | 北京科技大学 | Method and device for acquiring human body posture data based on data fusion and machine vision |
CN116734412A (en) * | 2023-05-04 | 2023-09-12 | Tcl家用电器(合肥)有限公司 | Refrigeration equipment control method, device, refrigerator and computer-readable storage medium |
CN116269355A (en) * | 2023-05-11 | 2023-06-23 | 江西珉轩智能科技有限公司 | Safety monitoring system based on figure gesture recognition |
CN116703227A (en) * | 2023-06-14 | 2023-09-05 | 快住智能科技(苏州)有限公司 | Guest room management method and system based on intelligent service |
CN116703227B (en) * | 2023-06-14 | 2024-05-03 | 快住智能科技(苏州)有限公司 | Guest room management method and system based on intelligent service |
CN117315776A (en) * | 2023-09-13 | 2023-12-29 | 深圳市铁越电气有限公司 | Human behavior recognition method, device, terminal equipment and storage medium |
CN117255451A (en) * | 2023-10-24 | 2023-12-19 | 快住智能科技(苏州)有限公司 | Intelligent guest control method and system for hotel room management |
CN117255451B (en) * | 2023-10-24 | 2024-05-03 | 快住智能科技(苏州)有限公司 | Intelligent guest control method and system for hotel room management |
CN117953591A (en) * | 2024-03-27 | 2024-04-30 | 中国人民解放军空军军医大学 | Intelligent limb rehabilitation assisting method and device |
CN118570838A (en) * | 2024-05-15 | 2024-08-30 | 北京联合大学 | Method and device for identifying human posture key points for assisted rehabilitation exercise |
Also Published As
Publication number | Publication date |
---|---|
CN114067358B (en) | 2024-08-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114067358B (en) | Human body posture recognition method and system based on key point detection technology | |
CN110135375B (en) | Multi-person pose estimation method based on global information integration | |
CN106650687B (en) | Posture correction method based on depth information and skeleton information | |
CN111753747B (en) | Violent motion detection method based on a monocular camera and three-dimensional pose estimation | |
CN114220176A (en) | Human behavior recognition method based on deep learning | |
CN109522793A (en) | Multi-person abnormal behavior detection and recognition method based on machine vision | |
CN107590452A (en) | Identity recognition method and device based on gait and face fusion | |
CN107220604A (en) | Video-based fall detection method | |
Ghazal et al. | Human posture classification using skeleton information | |
CN112989889B (en) | Pose-guided gait recognition method | |
CN113920326A (en) | Fall behavior recognition method based on human skeleton key point detection | |
Sheu et al. | Improvement of human pose estimation and processing with the intensive feature consistency network | |
CN115482580A (en) | Multi-person evaluation system based on machine vision skeletal tracking technology | |
CN106295544A (en) | View-invariant gait recognition method based on Kinect | |
CN106815855A (en) | Human body motion tracking method combining generative and discriminative models | |
CN110334609B (en) | Intelligent real-time somatosensory capturing method | |
Yamao et al. | Development of human pose recognition system by using raspberry pi and posenet model | |
Bhargavas et al. | Human identification using gait recognition | |
Krzeszowski et al. | Gait recognition based on marker-less 3D motion capture | |
CN113378691A (en) | Intelligent home management system and method based on real-time user behavior analysis | |
Chai et al. | Human gait recognition: approaches, datasets and challenges | |
Weinrich et al. | Appearance-based 3D upper-body pose estimation and person re-identification on mobile robots | |
CN112036324B (en) | A human body posture determination method and system for complex multi-person scenes | |
CN115953838A (en) | Gait image tracking and identifying system based on MLP-Yolov5 network | |
Hazra et al. | A pilot study for investigating gait signatures in multi-scenario applications |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||