CN108932479A - A kind of human body anomaly detection method - Google Patents
- Publication number
- CN108932479A CN108932479A CN201810577722.1A CN201810577722A CN108932479A CN 108932479 A CN108932479 A CN 108932479A CN 201810577722 A CN201810577722 A CN 201810577722A CN 108932479 A CN108932479 A CN 108932479A
- Authority
- CN
- China
- Prior art keywords
- image
- behaviors
- abnormal
- human body
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06F18/24155—Bayesian classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
Abstract
The invention relates to a method for detecting abnormal human behavior. Video sequences of four behaviors (walking, running, punching, and falling down) are collected against a single fixed background. A Kalman filter algorithm extracts the moving human target, and the detected target images are stored to build a training data set. The Hu invariant moments, image entropy, and aspect ratio of each image are extracted to build a Bayesian classifier that recognizes the four abnormal human behaviors with good accuracy and real-time performance. The final behavior category is determined by combining the Bayesian classifier with a convolutional neural network, which improves recognition accuracy and reduces misjudgments and missed detections. The invention detects and classifies walking, running, punching, and falling down against a fixed background; the method is not limited to these four behaviors and can also be applied to other abnormal behaviors such as crowd gathering and fighting.
Description
Technical Field
The invention relates to human body recognition technology, and in particular to a method for detecting abnormal human behavior based on a Bayesian classifier and a convolutional neural network.
Background Art
Abnormal behavior analysis has wide application in public security, smart homes, and other fields, in settings such as homes, ATMs, banks, shops, parking lots, airports, government buildings, military bases, and docks. For example, in a smart home it can monitor an elderly person living alone for falls or loss of consciousness, and in crowded places it can monitor for falls, fighting, abnormal crowd gathering, riots, and other abnormal situations.
The current security practice is to install cameras in public areas, but the traditional monitoring model has two main problems. First, large numbers of operators must watch the video around the clock, which causes sensory fatigue and leads to missed or wrong detections. Second, many cameras record continuously 24 hours a day, generating large amounts of data that are difficult to search afterwards. Computer-aided detection is therefore needed.
Real scenes often contain uncontrollable factors such as illumination changes, shadows, occlusion, and viewpoint changes, which severely limit existing abnormal behavior detection methods in practice. Improving the robustness of abnormal behavior analysis is therefore an urgent problem.
The patent with application number 201610790306.0 proposes a real-time human abnormal behavior recognition method based on a multi-scale convolutional neural network, which extends the convolutional neural network with 3D convolution, 3D downsampling, and a 3D pyramid structure for feature extraction, and is trained on a specific video set.
The patent with application number 201610403457.6 proposes an ATM abnormal human behavior detection method that optimizes an HMM algorithm with linear discriminant decisions; based on the linear discriminant results between feature vectors, it normalizes the feature vectors to compensate for imbalance among them.
The patent with application number 201610104559.8 proposes a human abnormal behavior detection method based on motion detection. It combines background subtraction with motion history images for motion detection, matches human bodies using contextual information, and makes a system-level decision from the geometric features of the bounding rectangle and the centroid to identify abnormal human behavior and determine the intrusion direction.
The patent with application number 201710544547.1 proposes a deep-learning human behavior recognition system in the captcha field; its neural network model contains an embedding layer, an LSTM, a fully connected layer, and a softmax layer, and has high discrimination ability.
The patent with application number 201710496942.7 proposes a human abnormal behavior recognition method based on a monitoring system. It tracks the denoised foreground image with the Mean-shift target tracking algorithm, extracts moving targets with background differencing, builds a database of standard behaviors, and performs abnormality judgment against it.
The patent with application number 201710441026.3 proposes a crowd abnormal behavior detection and analysis system based on a deep convolutional neural network. Human objects are extracted by the deep network, the optical flow method judges their motion state for clustering and crowd modeling, and various crowd behavior abnormalities are finally identified from different combinations of quantitative indicators such as crowd density, motion vector magnitude, and duration.
Compared with the above patents, the present invention combines a Bayesian classifier with a convolutional neural network for abnormal human behavior detection, which effectively improves recognition accuracy, and it can detect the four behaviors of walking, running, punching, and falling down against a single fixed background.
Summary of the Invention
Aiming at problems in human body recognition applications, the present invention proposes a method for detecting abnormal human behavior that can detect the four behaviors of walking, running, punching, and falling down against a single fixed background.
The technical solution of the present invention is a method for detecting abnormal human behavior, comprising the following steps:
1) Collect video sequences of the four behaviors of walking, running, punching, and falling down against a single fixed background; use the Kalman filter algorithm to extract the moving human target and its image; store the detected target images to build a training data set.
2) Use the images in the training data set to train the Bayesian classifier and the convolutional neural network separately, obtaining a trained Bayesian classifier and a trained convolutional neural network.
3) For the video image sequence captured in real time on site, apply Kalman filtering to each frame to extract the moving human target, then feed each extracted image to both the trained Bayesian classifier and the trained convolutional neural network, obtaining a classification result from each. When abnormal behavior is detected, the two results are combined: if the two classification results agree, the result is output directly; if one classifier detects an abnormality and the other does not, a warning of "possible abnormality" is issued and detection continues with the next frame; if both detect an abnormality but disagree on the category, a warning of "abnormality present, category uncertain" is issued and detection continues with the next frame.
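The per-frame decision-fusion rule of step 3) can be sketched as follows. The class labels and the "normal" sentinel value are illustrative assumptions, not names taken from the patent.

```python
# Sketch of the two-classifier fusion rule from step 3).
NORMAL = "normal"  # assumed label meaning "no abnormality detected"

def fuse(bayes_pred: str, cnn_pred: str) -> str:
    """Combine the Bayesian and CNN predictions for one frame."""
    if bayes_pred == cnn_pred:
        return bayes_pred          # both classifiers agree: output directly
    if NORMAL in (bayes_pred, cnn_pred):
        return "possible abnormality"   # only one classifier fired
    return "abnormality present, category uncertain"  # both fired, disagree

print(fuse("fall", "fall"))     # agreement case
print(fuse("fall", "normal"))   # one classifier only
print(fuse("run", "fall"))      # conflicting categories
```

In a real system the "continue with the next frame" behavior would sit in the video loop around this function; the sketch only covers the per-frame decision.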
The Bayesian classifier in step 2) is built as follows: for the images in the training data set, grouped by the four abnormal behavior categories of walking, running, falling down, and punching, extract three features per image (aspect ratio, image entropy, and Hu invariant moment); from the conditional probability distributions of the feature values for the four behaviors, build a Bayesian classifier and use the Bayes formula to classify abnormal behaviors.
The beneficial effects of the invention are as follows. The method extracts the Hu invariant moments, image entropy, and aspect ratio of each image to build a Bayesian classifier, recognizing the four abnormal human behaviors with good accuracy and real-time performance. Combining the Bayesian classifier with a convolutional neural network for the final category decision improves recognition accuracy and reduces misjudgments and missed detections. The invention detects and classifies walking, running, punching, and falling down against a fixed background; the method is not limited to these four behaviors and can also be applied to other abnormal behaviors such as crowd gathering and fighting.
Brief Description of the Drawings
Figure 1 is a flowchart of extracting the moving human target with the Kalman filter algorithm in the present invention;
Figure 2a shows the detection result for a person walking;
Figure 2b shows the detection result for a person running;
Figure 2c shows the detection result for a person punching;
Figure 2d shows the detection result for a person falling down;
Figure 3 is a flowchart of building the Bayesian classifier in the present invention;
Figure 4 is a flowchart of training the convolutional neural network in the present invention;
Figure 5 is a flowchart of abnormal behavior detection and classification in the present invention.
Detailed Description of the Embodiments
The specific embodiment of the invention is as follows. First, video sequences of the four behaviors of walking, running, punching, and falling down are collected against a single fixed background, and the Kalman filter algorithm extracts the moving human target image. The abnormal behavior category is then determined. Category determination consists of a training part and a testing part. The training part builds the training data set, uses its images to build the Bayesian classifier, and trains the convolutional neural network; it is completed offline before the system is used. In the testing stage, Kalman-filter moving-target extraction is applied to each frame of the video sequence captured in real time, and the extracted human images are classified by the Bayesian classifier and by the convolutional neural network separately. The two classification results are then combined to give the final decision on the abnormal behavior of the moving human body.
Figure 1 is a flowchart of the Kalman filter algorithm extracting the moving human target, comprising the following steps:
1. Video acquisition: a fixed camera records video containing the four behaviors of walking, running, punching, and falling down against a fixed simple background, and the video image sequence is fed to a PC.
2. Read in one frame and estimate its background to generate the initial background image.
3. Read in the next frame and apply the Kalman filter algorithm (see invention patent 201010177661.3 for details); from the initial background image and the current frame, obtain the foreground moving-target region in the current frame.
4. Compute connected components of the foreground region; filter out interference regions whose area is below a set threshold, and keep regions whose area is above the threshold as the moving human target region.
5. Mark the moving human target region with its minimum bounding rectangle and perform abnormal behavior detection on the image of that region.
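The steps above can be sketched roughly as follows. A single static background image and fixed thresholds stand in for the Kalman-filter background model of patent 201010177661.3, and a global pixel-count filter stands in for per-component connected-domain areas, so this is an illustrative simplification, not the patented algorithm; the threshold values are assumptions.

```python
import numpy as np

def extract_target(background, frame, diff_thresh=30, min_area=50):
    """Return the minimum bounding rectangle (x0, y0, x1, y1) of the
    foreground region, or None if the region is below min_area pixels."""
    # Step 3 stand-in: threshold the absolute background difference.
    fg = np.abs(frame.astype(int) - background.astype(int)) > diff_thresh
    ys, xs = np.nonzero(fg)
    # Step 4 stand-in: discard tiny (noise) foreground regions.
    if ys.size < min_area:
        return None
    # Step 5: minimum bounding rectangle of the moving region.
    return xs.min(), ys.min(), xs.max(), ys.max()
```

A production version would label connected components individually (e.g. with a connected-component pass) and filter each region's area separately, as the patent describes.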
Figures 2a, 2b, 2c, and 2d show the extraction results for walking, running, punching, and falling down, respectively. In each figure, the first image is the original picture, the second shows the motion region selected by the bounding rectangle, and the third is the extracted moving-target image.
After the moving human target image is extracted, the abnormal behavior category is determined. Category determination is divided into training and testing. Before the abnormal behavior detection system is used, the Bayesian classifier is built and the convolutional neural network is trained from the training data set; both processes can be completed offline.
Figure 3 is a flowchart of building the Bayesian classifier for abnormal behavior detection. The Bayesian classifier has the following advantages: its logic is simple and easy to implement; its time and space overhead is small, giving high speed and good recognition accuracy even with large training sets; and its performance is stable. Using the image aspect ratio, image entropy, and Hu invariant moment features to distinguish the abnormal behavior images gives accurate recognition, simple computation, and good real-time performance.
The Bayesian classifier is built as follows: for the images in the training data set, grouped by abnormal behavior category (walking, running, falling down, and punching), extract the three features of aspect ratio, image entropy, and Hu invariant moment; from the conditional probability distributions of the feature values for the four behaviors, build a Bayesian classifier and use the Bayes formula to classify abnormal behaviors.
As shown in Figure 3, building the Bayesian classifier for abnormal behavior detection comprises the following steps:
1. Build the training data set. The video used to build the training set can be captured live on site; it must contain the human actions to be detected and classified, such as falling down, running, and punching. The present invention uses the public KTH action recognition data set, which contains 600 video clips of about 120 frames each, with a frame size of 160×120 pixels. The image background in these videos is fixed, and each frame contains one human target. The invention builds the training data set from the clips of the four abnormal behaviors: walking, punching, running, and falling down.
The training data set is built as follows: for each video image sequence, Kalman-filter moving-target detection is performed frame by frame to extract the moving human target (the Kalman filter method is as above; the processing flow is shown in Figure 1). The detected moving human target images are stored to form the training data set of the present invention. Because the videos contain abnormal behaviors such as walking, punching, running, and falling down, the constructed training set contains images of all four abnormal behaviors.
2. Feature extraction
The three feature values (aspect ratio, image entropy, and Hu invariant moment) are extracted from the images of the four abnormal behaviors in the data set as follows:
(1) Extraction of the image aspect ratio
As shown in Figures 2a, 2b, 2c, and 2d, the segmented target is marked with its minimum bounding rectangle; the aspect ratio of this rectangle is the aspect-ratio feature value of the target image.
(2) Extraction of the image entropy
The neighborhood gray-level mean of the image is chosen as the spatial feature of the gray-level distribution; together with the pixel gray value it forms a feature pair, denoted (i, j), where i is the gray value of the pixel (0 <= i <= 255), j is the neighborhood gray-level mean (0 <= j <= 255), and f(i, j) is the number of occurrences of the feature pair (i, j). The joint probability Pij is:
Pij = f(i, j) / (M·N)
which is the proportion of the feature pair (i, j) in the image, where M·N is the total number of pixels. The two-dimensional entropy of the image is:
H = −Σi Σj Pij log2 Pij
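Under the definition H = −Σi Σj Pij log2 Pij, the two-dimensional entropy can be sketched as below. The 3×3 neighborhood and edge-replicated padding are assumptions; the patent does not fix the neighborhood size.

```python
import numpy as np

def image_entropy_2d(img: np.ndarray) -> float:
    """Two-dimensional entropy over (pixel value, neighborhood mean) pairs."""
    img = img.astype(np.uint8)
    # Neighborhood gray-level mean j for every pixel (3x3 box filter).
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    nb = sum(pad[dy:dy + h, dx:dx + w]
             for dy in range(3) for dx in range(3)) / 9.0
    j = nb.round().astype(np.uint8)
    # Joint histogram f(i, j) over 256x256 bins, then Pij = f / (M*N).
    hist = np.zeros((256, 256))
    np.add.at(hist, (img.ravel(), j.ravel()), 1)
    p = hist / img.size
    p = p[p > 0]                       # 0 * log 0 is taken as 0
    return float(-(p * np.log2(p)).sum())
```

A perfectly uniform image concentrates all mass in one (i, j) bin and so has entropy 0; any gray-level variation spreads the joint histogram and raises H.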
(3) Extraction of the image Hu invariant moment
The present invention uses the Hu5 invariant moment of the image as the feature value. The two-dimensional (p+q)-order moment of the image G(x, y) is defined as:
mpq = Σx Σy x^p · y^q · G(x, y)
where G(x, y) is the gray value of the image at position (x, y), and x, y are the coordinates of the image pixels.
The corresponding central moment is defined as:
μpq = Σx Σy (x − x0)^p · (y − y0)^q · G(x, y)
where x0 = m10/m00 and y0 = m01/m00 are the coordinates of the image centroid.
The normalized (p+q)-order central moment is defined as:
ηpq = μpq / μ00^ρ, with ρ = (p + q)/2 + 1
The Hu5 invariant moment φ5 of an image is defined as:
φ5 = (η30 − 3η12)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] + (3η21 − η03)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²]
The Hu5 invariant moment feature value of the image is computed from this formula.
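The chain of definitions above (raw moments mpq, central moments μpq, normalized moments ηpq, then φ5) can be sketched directly:

```python
import numpy as np

def hu5(img: np.ndarray) -> float:
    """Hu5 invariant moment, computed from the definitions in the text."""
    img = img.astype(float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m = lambda p, q: (xs**p * ys**q * img).sum()          # raw moment m_pq
    x0, y0 = m(1, 0) / m(0, 0), m(0, 1) / m(0, 0)         # centroid
    mu = lambda p, q: ((xs - x0)**p * (ys - y0)**q * img).sum()
    eta = lambda p, q: mu(p, q) / mu(0, 0) ** ((p + q) / 2 + 1)
    e30, e12, e21, e03 = eta(3, 0), eta(1, 2), eta(2, 1), eta(0, 3)
    return float(
        (e30 - 3*e12) * (e30 + e12) * ((e30 + e12)**2 - 3*(e21 + e03)**2)
        + (3*e21 - e03) * (e21 + e03) * (3*(e30 + e12)**2 - (e21 + e03)**2))
```

For a centrally symmetric image all third-order central moments vanish, so φ5 is 0; this is a useful sanity check. OpenCV's `cv2.HuMoments` (which returns all seven Hu moments) could serve as an independent cross-check, with φ5 at index 4.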
3. Build the Bayesian classifier from the feature values
The specific steps are as follows (see invention patent 201510981972.8 for details of how the Bayesian classifier is built):
(1) For the three feature values extracted above (image aspect ratio, image entropy, and Hu invariant moment), compute the mean and variance of the feature values of each of the four abnormal behavior image classes, obtaining the conditional probability distributions of the feature values. The feature extraction results on the data set used in the present invention are shown in Table 1. The aspect ratio ranges differ between abnormal behaviors: falling down, 0.7 to 1.3; normal walking, 0.3 to 0.5; running, 0.28 to 0.45; punching, 0.4 to 0.55. From the mean c and variance σ², the Gaussian probability distributions of the three feature values are obtained for each of the four behaviors (walking, running, punching, and falling down); these are the class-conditional probability density functions:
p(x) = (1 / (√(2π)·σ)) · exp(−(x − c)² / (2σ²))
where x is the feature value, c is the mean of that feature value, and σ² is its variance.
For example, from the data in Table 1, the probability distribution function of the Hu5 invariant moment feature for normal-walking images is obtained.
Table 1
(2) Build the naive Bayes classifier
Let x = {a1, a2, a3} be an item to be classified, where each a is a feature attribute of x: a1 is the image aspect ratio, a2 the image entropy, and a3 the image Hu invariant moment.
The set of abnormal behavior categories is C = {y1, y2, y3, y4}, where y1 denotes walking, y2 running, y3 punching, and y4 falling down.
The posterior probability P(yi|X) is computed with the Bayes formula:
P(yi|X) = P(X|yi) · P(yi) / P(X)
where i = 1, 2, 3, 4, and P(X|y1), P(X|y2), P(X|y3), P(X|y4) are the class-conditional probability densities of walking, running, punching, and falling down, respectively.
In this formula the evidence P(X) is the same constant for every category, so comparing the posterior probabilities P(yi|X) amounts to comparing the products P(X|yi)·P(yi).
Because the feature attributes are conditionally independent given the class:
P(X|yi) = P(a1|yi) · P(a2|yi) · P(a3|yi)
If P(X|yk)·P(yk) = max over i = 1, 2, 3, 4 of P(X|yi)·P(yi), then x ∈ yk.
The Bayesian classifier can therefore be written as:
y(x) = argmax over yi of P(yi) · P(a1|yi) · P(a2|yi) · P(a3|yi)
For a test sample with feature values X, the behavior whose probability is largest is computed, and the sample is assigned to that behavior class.
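The Gaussian naive Bayes decision rule above can be sketched as follows. The per-class means and variances below are illustrative placeholders, not the actual Table 1 values.

```python
import math

def gauss(x, c, var):
    """Class-conditional density p(x | class) for one scalar feature."""
    return math.exp(-(x - c) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def classify(x, params, priors):
    """x: (aspect_ratio, entropy, hu5); params[cls]: list of (mean, var)."""
    score = lambda cls: priors[cls] * math.prod(
        gauss(xi, c, v) for xi, (c, v) in zip(x, params[cls]))
    return max(params, key=score)       # argmax over classes

# Illustrative parameters for two of the four classes (placeholder values).
params = {"walk": [(0.40, 0.01), (4.0, 0.5), (0.001, 1e-6)],
          "fall": [(1.00, 0.04), (4.5, 0.5), (0.003, 1e-6)]}
priors = {"walk": 0.5, "fall": 0.5}
print(classify((0.95, 4.4, 0.003), params, priors))  # aspect ratio near 1
```

In practice one would work with log-probabilities to avoid underflow when multiplying many small densities; the direct product is kept here to mirror the formula in the text.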
For abnormal behavior detection and recognition, a single classifier can miss detections or misjudge; the present invention therefore combines two classification methods to reduce detection and classification errors. The second method is a convolutional neural network. Using a convolutional neural network for abnormal behavior detection removes the explicit feature extraction step, which makes it simple and convenient.
Table 2 lists the parameters for building the convolutional neural network used for abnormal behavior detection. The invention uses the VGG-16 convolutional neural network model for abnormal behavior detection. The computer environment used to build the network is a PC with a 2.5 GHz quad-core CPU, 8 GB of RAM, and 64-bit Windows 10. The Anaconda environment is installed first, then TensorFlow via pip, and finally PyCharm, i.e. the software packages needed to run the convolutional neural network.
Table 2
建立卷积神经网络的具体方法如下:如表2所示,本发明选择9层卷积神经网络结构,分别为卷积层-池化层-卷积层-池化层-卷积层-池化层-卷积层-全连接层-输出层。其中输入图像设置为120*120大小,Layer0是一个卷积层,利用50个Feature Map特征图构成,每个Feature Map中的神经元与5*5的邻域相连。Feature Map大小设置为116*116,防止输入的连接跑到边界外面。The specific method of setting up the convolutional neural network is as follows: as shown in Table 2, the present invention selects 9 layers of convolutional neural network structures, which are respectively convolutional layer-pooling layer-convolutional layer-pooling layer-convolutional layer-pool layer-convolution layer-fully connected layer-output layer. The input image is set to a size of 120*120, and Layer0 is a convolutional layer, which is composed of 50 Feature Map feature maps, and the neurons in each Feature Map are connected to a 5*5 neighborhood. The Feature Map size is set to 116*116 to prevent the input connection from running outside the boundary.
Layer1 is a downsampling (pooling) layer with 50 feature maps of size 58*58. The pooling neighborhood is 2*2, so each feature map has 1/4 the area of its Layer0 counterpart (1/2 in each of the row and column dimensions).
Layer2 is another convolutional layer with a 5*5 kernel. Each feature map in this layer is connected to all 50 feature maps of the previous layer, or to a subset of them, representing different ways of combining feature maps.
Layer3 is a downsampling layer consisting of 100 feature maps of size 32*32.
Layer4 is a convolutional layer that convolves the previous layer with a 3*3 kernel, yielding feature maps of 30*30 neurons.
Layer5 is a downsampling layer composed of 200 feature maps of size 15*15, with a 2*2 pooling neighborhood.
Layer6 is a convolutional layer with 300 feature maps.
Layer7 is a fully connected layer of 100 units; each unit computes the dot product between its input vector and its weight vector, plus a bias.
Layer8 is the output layer, composed of Euclidean Radial Basis Function units, one unit per class, each with 100 inputs. For an input sample, softmax produces the probability that the sample image belongs to each abnormal-behavior category, and the image is assigned to the category with the highest probability.
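As a quick sanity check on the layer sizes above (an editorial sketch, not part of the patent), the valid-convolution and 2*2-pooling arithmetic can be verified in Python. Note that a 5*5 valid convolution of a 58*58 map yields 54*54, whereas the stated 32*32 Layer3 output would require a 64*64 Layer2; the sketch simply computes the valid-convolution results.

```python
def conv_out(size, kernel):
    """Output size of a valid (no-padding, stride-1) convolution."""
    return size - kernel + 1

def pool_out(size, window=2):
    """Output size of non-overlapping window pooling."""
    return size // window

s0 = conv_out(120, 5)   # Layer0: 120*120 input, 5*5 kernel -> 116*116
s1 = pool_out(s0)       # Layer1: 2*2 pooling -> 58*58
s2 = conv_out(s1, 5)    # Layer2: 5*5 kernel -> 54*54 (the description states 32*32 after Layer3)
s4 = conv_out(32, 3)    # Layer4: 3*3 kernel on 32*32 -> 30*30
s5 = pool_out(s4)       # Layer5: 2*2 pooling -> 15*15
```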
After the convolutional neural network is established, it must be trained with the images in the training data set before it can correctly classify a newly input test image.
Figure 4 shows the training flow chart of the abnormal-behavior detection and classification network. All images in the training data set constructed by Kalman filtering are normalized to the size expected by the convolutional neural network, then fed into the network built with the parameters above to train it.
Training is the process by which the network automatically learns image representations; the key operations are forward propagation and backpropagation. Forward propagation computes a feature representation of the input image and then classifies it. Backpropagation updates the network weights and biases so as to reduce the error. The specific steps are as follows:
1. Select a batch of training data from the training data set.
2. Obtain the network's predictions by forward propagation. The input sample is processed layer by layer, and the output is mapped to the number of classes the network expects. Training the model's parameters lets the network represent the features better; the computation each layer performs is essentially a dot product of its input with the layer's weight matrix, plus a bias term, and the last layer's result is the final output.
3. Update the network parameters by backpropagation. The error at the network's output layer is propagated backward through the preceding layers; from this propagated error, the gradients of the convolutional layers' weights and biases are computed and the model parameters are updated, adjusting the weight matrices in the direction that minimizes the error.
The convolutional and pooling layers of the model are first initialized. Forward propagation computes the loss error for the current parameter values, and the intermediate results are cached in the network; the error is then passed backward to the preceding pooling and convolutional layers, the gradients of the model parameters are accumulated, and the weights and biases are updated. Training ends when the required accuracy or the maximum number of training iterations is reached; otherwise the process above is repeated.
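The forward dot-product-plus-bias computation and the backward gradient update described above can be illustrated with a minimal single-layer softmax classifier in NumPy. This is only a sketch: the actual network has the nine layers of Table 2, and the data here are random stand-ins for the extracted human-body images.

```python
import numpy as np

# Random stand-in data: 40 "images" reduced to 8 features, 4 behaviour classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))
y = rng.integers(0, 4, size=40)

# A single weight matrix and bias vector; the real network stacks many such layers.
W = np.zeros((8, 4))
b = np.zeros(4)

def forward(X):
    """Forward propagation: dot product with the weights, plus bias, then softmax."""
    logits = X @ W + b
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, y):
    """Mean cross-entropy loss of predicted probabilities p against labels y."""
    return -np.log(p[np.arange(len(y)), y]).mean()

initial_loss = cross_entropy(forward(X), y)
for _ in range(200):
    p = forward(X)
    # Backpropagation for softmax + cross-entropy: gradient of loss w.r.t. logits.
    grad = p.copy()
    grad[np.arange(len(y)), y] -= 1.0
    grad /= len(y)
    W -= 0.5 * (X.T @ grad)      # update weights in the error-minimizing direction
    b -= 0.5 * grad.sum(axis=0)  # update biases likewise
final_loss = cross_entropy(forward(X), y)
```

Because the softmax-regression loss is convex, the loss decreases monotonically with a suitable step size, mirroring the "minimize the error" stopping criterion above.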
After the training phase, the testing phase begins. Training is generally completed offline, before the abnormal-behavior detection system is put into use. Detecting and judging abnormal behaviors on site belongs to the testing phase.
As shown in Figure 5, once the Bayesian classifier has been built and the convolutional neural network trained, abnormal behaviors can be detected and classified in the video image sequence captured in real time on site. The specific steps are as follows:
1. Read in one frame of the video.
2. Kalman-filter moving-target extraction. Extract moving targets from each frame by Kalman filtering (the method is the same as before). If no moving target is detected, read the next frame. If a moving target is detected, pass the extracted moving human-body image to both the Bayesian classifier and the convolutional neural network for abnormal-behavior classification.
3. Bayesian-classifier decision. From the extracted moving human-body image, first extract the aspect ratio, image entropy, and Hu invariant-moment features; then feed these feature values into the Bayesian classifier built above and determine the abnormal-behavior category by Bayes' formula.
4. Convolutional-neural-network decision. Normalize the extracted moving human-body image to the size expected by the network, feed it to the network, and obtain the probability that the image belongs to each of the four abnormal-behavior categories; the category with the highest probability is taken as the image's abnormal-behavior category. For example, if the network's output for a frame is: running 0.9546, falling 0.0030, walking 0.0113, punching 0.03097, the frame is judged as running.
5. Comprehensive judgment. Combine the decisions of the Bayesian classifier and the convolutional neural network to give the final result.
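The highest-probability rule of step 4 can be written directly (a sketch; the English label names are illustrative stand-ins for the patent's four behaviour categories):

```python
def cnn_category(probs):
    """Return the behaviour class with the highest softmax probability."""
    return max(probs, key=probs.get)

# The example network output from step 4:
probs = {"running": 0.9546, "falling": 0.0030, "walking": 0.0113, "punching": 0.03097}
print(cnn_category(probs))  # -> running
```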
The cases of the comprehensive judgment are listed in Table 3:
(1) When neither classifier detects an abnormality, continue with the next frame.
(2) When one classifier detects an abnormality and the other does not, issue a "possible abnormality" warning and continue with the next frame.
(3) When both classifiers detect an abnormality but disagree on the category, issue an "abnormality present, category uncertain" warning and continue with the next frame.
(4) When both classifiers detect an abnormality and agree on the category, for example both report "falling", issue a warning such as "abnormality present, category: falling" and continue with the next frame.
Table 3
(Note: A and B in the table denote any of the abnormal behaviors above.)
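The four cases of Table 3 can be sketched as a small decision function (an illustration only; `None` stands in for "no abnormality detected", and the message strings are paraphrases of the warnings above):

```python
def fuse(bayes, cnn):
    """Combine the Bayesian-classifier and CNN decisions per Table 3.

    Each argument is an abnormal-behaviour label, or None when that
    classifier detected no abnormality in the frame.
    """
    if bayes is None and cnn is None:
        return "no abnormality"                          # case (1)
    if bayes is None or cnn is None:
        return "possible abnormality"                    # case (2)
    if bayes != cnn:
        return "abnormality present, category uncertain" # case (3)
    return f"abnormality present, category: {bayes}"     # case (4)

print(fuse("falling", "falling"))  # -> abnormality present, category: falling
```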
Two classifiers are combined for the final judgment. No single classifier achieves 100% recognition accuracy; each is subject to misclassification and missed detections. Combining the two classifiers increases recognition accuracy, helps avoid missed detections, and gives early warning of possible abnormal behavior. When one classifier misjudges the abnormal-behavior category, the other can catch the individual error; a definite category judgment is issued only when the two classifiers agree, which increases confidence in the result. Since camera video is typically captured at about 25 frames per second, recognition simply continues on the next frame, so real-time detection of abnormal behavior is not affected.
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810577722.1A CN108932479A (en) | 2018-06-06 | 2018-06-06 | A kind of human body anomaly detection method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810577722.1A CN108932479A (en) | 2018-06-06 | 2018-06-06 | A kind of human body anomaly detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108932479A true CN108932479A (en) | 2018-12-04 |
Family
ID=64449995
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810577722.1A Pending CN108932479A (en) | 2018-06-06 | 2018-06-06 | A kind of human body anomaly detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108932479A (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109918996A (en) * | 2019-01-17 | 2019-06-21 | 平安科技(深圳)有限公司 | Method, system, computer equipment and storage medium for identifying illegal actions of persons |
CN110062379A (en) * | 2019-04-15 | 2019-07-26 | 哈尔滨工程大学 | Identity identifying method based on channel state information under a kind of human body behavior scene |
CN110119710A (en) * | 2019-05-13 | 2019-08-13 | 广州锟元方青医疗科技有限公司 | Cell sorting method, device, computer equipment and storage medium |
CN110163143A (en) * | 2019-05-17 | 2019-08-23 | 国网河北省电力有限公司沧州供电分公司 | Unlawful practice recognition methods, device and terminal device |
CN110245581A (en) * | 2019-05-25 | 2019-09-17 | 天津大学 | A Human Behavior Recognition Method Based on Deep Learning and Range-Doppler Sequence |
CN110309698A (en) * | 2019-03-21 | 2019-10-08 | 绵阳师范学院 | Automatic identification method of abnormal behavior of moving human body |
CN110765860A (en) * | 2019-09-16 | 2020-02-07 | 平安科技(深圳)有限公司 | Tumble determination method, tumble determination device, computer apparatus, and storage medium |
CN110956143A (en) * | 2019-12-03 | 2020-04-03 | 交控科技股份有限公司 | Abnormal behavior detection method and device, electronic equipment and storage medium |
CN111547209A (en) * | 2020-05-22 | 2020-08-18 | 合肥利元杰信息科技有限公司 | Drowning prevention safety guarantee method, device and system |
CN111597992A (en) * | 2020-05-15 | 2020-08-28 | 哈尔滨工业大学 | Scene object abnormity identification method based on video monitoring |
CN111753587A (en) * | 2019-03-28 | 2020-10-09 | 杭州海康威视数字技术股份有限公司 | Method and device for detecting falling to ground |
CN111918034A (en) * | 2020-07-28 | 2020-11-10 | 上海电机学院 | Embedded unattended base station intelligent monitoring system |
CN112200081A (en) * | 2020-10-10 | 2021-01-08 | 平安国际智慧城市科技股份有限公司 | Abnormal behavior identification method, device, electronic device and storage medium |
CN112507984A (en) * | 2021-02-03 | 2021-03-16 | 苏州澳昆智能机器人技术有限公司 | Conveying material abnormity identification method, device and system based on image identification |
CN112784725A (en) * | 2021-01-15 | 2021-05-11 | 北京航天自动控制研究所 | Pedestrian anti-collision early warning method and device, storage medium and forklift |
CN112783327A (en) * | 2021-01-29 | 2021-05-11 | 中国科学院计算技术研究所 | Method and system for gesture recognition based on surface electromyogram signals |
CN114973020A (en) * | 2022-06-15 | 2022-08-30 | 北京鹏鹄物宇科技发展有限公司 | Abnormal behavior analysis method based on satellite monitoring video |
CN115050105A (en) * | 2022-08-17 | 2022-09-13 | 杭州觅睿科技股份有限公司 | Method, device and equipment for judging doubtful shadow and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110170749A1 (en) * | 2006-09-29 | 2011-07-14 | Pittsburgh Pattern Recognition, Inc. | Video retrieval system for human face content |
CN103976739A (en) * | 2014-05-04 | 2014-08-13 | 宁波麦思电子科技有限公司 | Wearing type dynamic real-time fall detection method and device |
CN105631414A (en) * | 2015-12-23 | 2016-06-01 | 上海理工大学 | Vehicle-borne multi-obstacle classification device and method based on Bayes classifier |
CN106228142A (en) * | 2016-07-29 | 2016-12-14 | 西安电子科技大学 | Face verification method based on convolutional neural networks and Bayesian decision |
CN106778796A (en) * | 2016-10-20 | 2017-05-31 | 江苏大学 | Human motion recognition method and system based on hybrid cooperative model training |
CN107169415A (en) * | 2017-04-13 | 2017-09-15 | 西安电子科技大学 | Human motion recognition method based on convolutional neural networks feature coding |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108932479A (en) | A kind of human body anomaly detection method | |
Du et al. | Hierarchical recurrent neural network for skeleton based action recognition | |
Devries et al. | Multi-task learning of facial landmarks and expression | |
Elhamod et al. | Automated real-time detection of potentially suspicious behavior in public transport areas | |
Xie et al. | Feature consistency-based prototype network for open-set hyperspectral image classification | |
Hu | Design and implementation of abnormal behavior detection based on deep intelligent analysis algorithms in massive video surveillance | |
Gowsikhaa et al. | Suspicious Human Activity Detection from Surveillance Videos. | |
CN111881750A (en) | Crowd abnormity detection method based on generation of confrontation network | |
Parashar et al. | Deep learning pipelines for recognition of gait biometrics with covariates: a comprehensive review | |
Soleimanitaleb et al. | Single object tracking: A survey of methods, datasets, and evaluation metrics | |
CN105046195A (en) | Human behavior identification method based on asymmetric generalized Gaussian distribution model (AGGD) | |
Zhao et al. | Robust unsupervised motion pattern inference from video and applications | |
Kobayashi et al. | Three-way auto-correlation approach to motion recognition | |
US11682111B2 (en) | Semi-supervised classification of microorganism | |
CN105404894A (en) | Target tracking method used for unmanned aerial vehicle and device thereof | |
WO2020088763A1 (en) | Device and method for recognizing activity in videos | |
Xia et al. | Face occlusion detection using deep convolutional neural networks | |
CN114972735A (en) | Anti-occlusion moving target tracking device and method based on ROI prediction and multi-module learning | |
Wang et al. | Learning to find reliable correspondences with local neighborhood consensus | |
Mishra et al. | Real-Time pedestrian detection using YOLO | |
Talukdar et al. | Human action recognition system using good features and multilayer perceptron network | |
Kajendran et al. | Recognition and detection of unusual activities in ATM using dual-channel capsule generative adversarial network | |
Kokila et al. | Efficient abnormality detection using patch-based 3D convolution with recurrent model | |
Modi et al. | Automated anomaly detection and multi-label anomaly classification in crowd scenes based on optimal thresholding and deep learning strategy | |
Ghuge et al. | An Integrated Approach Using Optimized Naive Bayes Classifier and Optical Flow Orientation for Video Object Retrieval. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20181204 |
|
RJ01 | Rejection of invention patent application after publication |