CN108229251A - Action recognition method and device
- Publication number
- CN108229251A (application CN201611160942.1A)
- Authority
- CN
- China
- Prior art keywords
- distance
- value
- fitting value
- area
- fitting
- Prior art date: 2016-12-15
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
Abstract
Description
Technical Field
The present invention relates to the field of smart home, and in particular to an action recognition method and device.
Background Art
With the development of smart technology, the smart home industry is also developing continuously, and cameras are essential in a smart home: for example, a camera can be used to recognize the actions of subjects engaged in household activities, for purposes such as indoor security monitoring, personal safety monitoring, and personal entertainment.
In the prior art, when action recognition is performed with a camera, it is first necessary to extract the subject's whole-body features, especially features of the trunk and lower limbs, from the video images provided by the camera; the features are then analyzed to obtain the subject's motion pattern, and a mapping between video image content and behavior type is established. Only on the basis of this mapping can action recognition be performed.
However, in a smart home, action recognition is usually applied to real-time scenes and must be timely, and the actions of subjects in real-time scenes have a certain complexity and diversity. If the series of complex algorithms used for action recognition in the prior art were adopted, timeliness would be greatly reduced, making it difficult to meet the application needs of smart homes.
Moreover, the action recognition methods in the prior art require extracting the subject's whole-body features, especially features of the trunk and lower limbs, and are therefore not applicable in home scenes where the lower limbs are occluded (for example, sitting down at or standing up from a dining table, or sitting down on or standing up from the edge of a sofa).
Therefore, a new action recognition method needs to be designed to overcome the above defects.
Summary of the Invention
Embodiments of the present invention provide an action recognition method and device, which are used to solve the prior-art problem that action recognition cannot be performed accurately when the camera's line of sight is occluded.
The specific technical solutions provided by the embodiments of the present invention are as follows:
An action recognition method, comprising:
extracting video images within a preset time period;
determining, for each frame contained in the video images, an area fitting value and a distance fitting value of the human face in the frame, to obtain a set of area fitting values and a set of distance fitting values;
extracting a first distance fitting value and a second distance fitting value from the set of distance fitting values according to a preset rule;
determining, when it is determined based on the set of area fitting values that the first distance fitting value and the second distance fitting value satisfy a preset condition, that a target action occurs within the preset time period.
Optionally, determining the area fitting value and the distance fitting value of the human face in any frame contained in the video images comprises:
performing face detection on the frame to determine a face region;
determining, based on the face region, the coordinate position of the face in the frame;
calculating, based on the coordinate position, an area value and a distance value of the face, the distance value being the distance between the face and the lower horizontal edge of the frame;
determining, based on the area value and the distance value, an area fitting value corresponding to the area value and a distance fitting value corresponding to the distance value.
Optionally, extracting the first distance fitting value and the second distance fitting value from the set of distance fitting values according to the preset rule comprises:
recording each distance fitting value in the set of distance fitting values to form a discrete distance-fitting-value curve;
determining the pair of peak and trough closest to the starting point of the preset time period;
extracting the peak and the trough, and determining the peak to be the first distance fitting value and the trough to be the second distance fitting value.
Optionally, determining, when it is determined based on the set of area fitting values that the first distance fitting value and the second distance fitting value satisfy the preset condition, that a target action occurs within the preset time period comprises:
determining a first frame time corresponding to the first distance fitting value and a second frame time corresponding to the second distance fitting value;
extracting, from the set of area fitting values, a first area fitting value corresponding to the second frame time;
determining that the absolute difference between the second frame time and the first frame time lies between a preset first frame-time threshold and a preset second frame-time threshold; and
determining that the absolute difference between the second distance fitting value and the first distance fitting value lies between twice the first area fitting value and three times the first area fitting value;
thereupon determining that the target action occurs within the preset time period.
Optionally, after determining that the target action occurs within the preset time period, the method further comprises:
determining that the target action is standing up if the second distance fitting value is higher than the first distance fitting value;
determining that the target action is sitting down if the second distance fitting value is lower than the first distance fitting value.
An action recognition device, comprising:
a first extraction unit, configured to extract video images within a preset time period;
a first determination unit, configured to determine, for each frame contained in the video images, an area fitting value and a distance fitting value of the human face in the frame, to obtain a set of area fitting values and a set of distance fitting values;
a second extraction unit, configured to extract a first distance fitting value and a second distance fitting value from the set of distance fitting values according to a preset rule;
a second determination unit, configured to determine, when it is determined based on the set of area fitting values that the first distance fitting value and the second distance fitting value satisfy a preset condition, that a target action occurs within the preset time period.
Optionally, when determining the area fitting value and the distance fitting value of the human face in any frame contained in the video images, the first determination unit is configured to:
perform face detection on the frame to determine a face region;
determine, based on the face region, the coordinate position of the face in the frame;
calculate, based on the coordinate position, an area value and a distance value of the face, the distance value being the distance between the face and the lower horizontal edge of the frame;
determine, based on the area value and the distance value, an area fitting value corresponding to the area value and a distance fitting value corresponding to the distance value.
Optionally, when extracting the first distance fitting value and the second distance fitting value from the set of distance fitting values according to the preset rule, the second extraction unit is configured to:
record each distance fitting value in the set of distance fitting values to form a discrete distance-fitting-value curve;
determine the pair of peak and trough closest to the starting point of the preset time period;
extract the peak and the trough, and determine the peak to be the first distance fitting value and the trough to be the second distance fitting value.
Optionally, when determining, based on the set of area fitting values, that a target action occurs within the preset time period when the first distance fitting value and the second distance fitting value satisfy the preset condition, the second determination unit is configured to:
determine a first frame time corresponding to the first distance fitting value and a second frame time corresponding to the second distance fitting value;
extract, from the set of area fitting values, a first area fitting value corresponding to the second frame time;
determine that the absolute difference between the second frame time and the first frame time lies between a preset first frame-time threshold and a preset second frame-time threshold; and
determine that the absolute difference between the second distance fitting value and the first distance fitting value lies between twice the first area fitting value and three times the first area fitting value;
thereupon determine that the target action occurs within the preset time period.
Optionally, after determining that the target action occurs within the preset time period, the second determination unit is further configured to:
determine that the target action is standing up if the second distance fitting value is higher than the first distance fitting value;
determine that the target action is sitting down if the second distance fitting value is lower than the first distance fitting value.
In the embodiments of the present invention, video images within a preset time period are first extracted; the area fitting value and the distance fitting value of the face in each frame contained in the video images are determined to obtain a set of area fitting values and a set of distance fitting values; a first distance fitting value and a second distance fitting value are extracted from the set of distance fitting values according to a preset rule; and when, based on the set of area fitting values, the extracted first and second distance fitting values are determined to satisfy a preset condition, it is determined that a target action occurs within the preset time period. In this way, only the subject's face needs to be detected for action recognition to be completed in real time, which effectively avoids the problem that accurate action recognition is impossible because the subject's limbs and trunk cannot be determined when the camera's line of sight is occluded.
Brief Description of the Drawings
Fig. 1 is a structural diagram of the action recognition system in an embodiment of the present invention;
Fig. 2 is a flowchart of the action recognition method in an embodiment of the present invention;
Fig. 3 is an example coordinate-position diagram in an embodiment of the present invention;
Fig. 4 is an example curve diagram in an embodiment of the present invention;
Fig. 5 is a diagram of the action recognition device in an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
To solve the problems that, in real-time scenes, the prior art performs action recognition on subjects with low timeliness and that, in home scenes where the lower limbs are occluded (for example, sitting down at or standing up from a dining table, or sitting down on or standing up from the edge of a sofa), action recognition cannot be performed accurately, an embodiment of the present invention designs an action recognition method: video images within a preset time period are first extracted; the area fitting value and the distance fitting value of the face in each frame contained in the video images are determined to obtain a set of area fitting values and a set of distance fitting values; a first distance fitting value and a second distance fitting value are extracted from the set of distance fitting values according to a preset rule; and when, based on the set of area fitting values, the extracted first and second distance fitting values satisfy a preset condition, it is determined that a target action occurs within the preset time period.
Referring to Fig. 1, in an embodiment of the present invention, a smart camera monitors the monitored area in real time and transmits the video images of the monitored area in real time to a human motion analysis platform, which analyzes and processes the received video images in real time to identify target actions of subjects located in the monitored area. The smart camera may transmit the video images to the human motion analysis platform over a wireless or wired network.
Specifically, referring to Fig. 2, the human motion analysis platform performs action recognition on the received video images as follows:
Step 200: the human motion analysis platform extracts video images within a preset time period.
Specifically, the human motion analysis platform extracts, according to the preset time period, the video images that fall within that period.
The preset time period may be adjusted according to actual needs. For example, to recognize the actions of a subject in the video images of the last half hour, the platform may extract half an hour of video images; to recognize the actions of a subject in the video images of the last 10 minutes, it may extract 10 minutes of video images.
The preset time period may also be set according to the platform's capacity for processing video images. For example, if the platform can process 1 TB of video images at a time, the time span S that this capacity can accommodate is calculated from the resolution of the video images to be extracted, and S is used as the preset time period.
The embodiments of the present invention do not limit the way in which the preset time period is set.
Step 210: the human motion analysis platform performs face detection on each frame contained in the extracted video images and determines the face region.
Specifically, the platform splits the extracted video images into individual frames according to the frame format, applies a face detection algorithm to each frame, and marks the face in every frame in which a face is detected, thereby determining the face region. Preferably, in the embodiments of the present invention, the face is marked with a rectangular box.
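As an illustration only: the patent does not name a specific face detection algorithm, so the sketch below stands in with OpenCV's stock Haar-cascade detector, and the video path and time window are likewise assumptions.

```python
import cv2

def detect_face_boxes(video_path, start_sec, end_sec):
    """Steps 200-210: pull frames from the preset time window and mark
    the face region in each frame as a box (X1, Y1, X2, Y2)."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_MSEC, start_sec * 1000)
    boxes = []
    while cap.get(cv2.CAP_PROP_POS_MSEC) <= end_sec * 1000:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        # keep the first detected face per frame; None if no face was found
        if len(faces) > 0:
            x, y, w, h = faces[0]
            boxes.append((x, y, x + w, y + h))
        else:
            boxes.append(None)
    cap.release()
    return boxes
```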
Step 220: the human motion analysis platform determines, based on the face region in each frame, the coordinate position of the face in that frame.
Specifically, the face region has already been determined in every frame in which a face was detected; based on the face region determined in each frame, the coordinate position of the face in that frame is determined, as shown in Fig. 3.
For example, suppose a face has been detected in frame T and the face region (the small box) has been determined. With the upper-left vertex of frame T as the origin, a coordinate system is established, the coordinate positions of the vertices of the box enclosing the face region are determined, the coordinate group of the box being {X1_T, X2_T, Y1_T, Y2_T}, and the maximum width X_f and maximum height Y_f of the frame are determined.
Step 230: the human motion analysis platform calculates, based on the coordinate position of the face in each frame, the area value and the distance value of the face, the distance value being the distance between the face and the lower horizontal edge of the frame.
Specifically, the platform has determined the coordinate position of the face in every frame in which a face was detected; based on that coordinate position, the area value and the distance value of the face are calculated.
Continuing the above example and referring to Fig. 3, the area value and the distance value of the face in frame T are calculated. Preferably, in the embodiments of the present invention, the area value S_T of the face in frame T is obtained with the following formula:
S_T = Y2_T - Y1_T
where Y2_T and Y1_T are the Y coordinates of the face region in frame T.
Preferably, in the embodiments of the present invention, the distance value H_T of the face is obtained with the following formula:
H_T = Y_f - (Y1_T + Y2_T) / 2
where Y2_T and Y1_T are the Y coordinates of the face region in frame T, and Y_f is the maximum height of frame T.
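The two formulas above transcribe directly into code; following Fig. 3, the origin is the frame's upper-left corner, so the Y axis points downward and Y_f is the frame height:

```python
def area_and_distance(box, frame_height):
    """Step 230: S_T = Y2 - Y1 and H_T = Y_f - (Y1 + Y2) / 2 for one frame."""
    x1, y1, x2, y2 = box
    s = y2 - y1                         # face "area" value S_T
    h = frame_height - (y1 + y2) / 2.0  # distance H_T to the lower edge
    return s, h
```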
Step 240: the human motion analysis platform determines, based on the area value and the distance value of the face obtained in each frame, the corresponding area fitting value and distance fitting value, and obtains the set of area fitting values and the set of distance fitting values.
Specifically, the area value and the distance value of the face in each frame obtained in step 230 are determined from the corresponding current frame time alone. However, a subject usually performs an action, such as squatting, continuously; therefore, the area value and the distance value of the face in the frame at the current frame time are usually also related to the area value and the distance value of the face in the frame at the previous frame time, and even to those in the frames at the two preceding frame times.
Further, after the area value and the distance value of the face in a given frame are obtained, they are combined with the area values and distance values of the face in the frames at the one, two, and three preceding frame times to calculate the area fitting value and the distance fitting value of the face in that frame.
Preferably, in the embodiments of the present invention, still taking frame T as an example, the distance fitting value HC_T of the face in the frame at frame time T is obtained with the following formula (the formula itself is an image in the source and is not reproduced here):
where H_T is the distance value of the face in the frame at frame time T; H_{T-1} is the distance value at frame time (T-1), i.e., one frame time before T; H_{T-2} is the distance value at frame time (T-2), i.e., two frame times before T; and H_{T-3} is the distance value at frame time (T-3), i.e., three frame times before T.
Preferably, in the embodiments of the present invention, still taking frame T as an example, the area fitting value SC_T of the face in the frame at frame time T is obtained with the following formula (again an image in the source, not reproduced here):
where S_T is the area value of the face in the frame at frame time T, and SC_{T-1} is the area fitting value of the face in the frame at frame time (T-1), i.e., one frame time before T.
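Hedged heavily: the patent's actual fitting formulas are unreproduced images, so the sketch below substitutes plain averages with the same dependencies the text describes (HC_T on the last four distance values, SC_T on the current area value and the previous fitted value). The equal weights are an assumption, not the patent's formula.

```python
def fit_values(h_values, s_values):
    """Step 240: smooth the raw per-frame values into HC and SC series.
    Equal averaging weights are assumed; the patent's exact formulas
    are not reproduced in the source text."""
    hc, sc = [], []
    for t in range(len(h_values)):
        window = h_values[max(0, t - 3):t + 1]   # H_T .. H_{T-3}
        hc.append(sum(window) / len(window))
        prev = sc[-1] if sc else s_values[t]     # SC_{T-1}, or S_T at t = 0
        sc.append((s_values[t] + prev) / 2.0)
    return hc, sc
```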
So far, the distance fitting value and the area fitting value of the face in the frame at frame time T have been obtained through the above steps. Since frame T was taken as the example, only the values for frame T have been obtained; for every frame in which a face has been determined to exist, the distance fitting value and the area fitting value of its face are obtained in the same way, the resulting distance fitting values form the set of distance fitting values, and the resulting area fitting values form the set of area fitting values.
For example, suppose it is determined that a face is detected in frames 8-10. If the distance fitting value and the area fitting value of the face in frame 8 are HC_8 and SC_8, those in frame 9 are HC_9 and SC_9, and those in frame 10 are HC_10 and SC_10, then the set of distance fitting values obtained is {HC_8, HC_9, HC_10} and the set of area fitting values obtained is {SC_8, SC_9, SC_10}.
Step 250: the human motion analysis platform extracts, from the obtained set of distance fitting values and according to a preset rule, the first distance fitting value and the second distance fitting value.
Specifically, the platform records each distance fitting value in the obtained set to form a discrete distance-fitting-value curve, determines the pair of peak and trough in the curve closest to the starting point of the preset time period, extracts that peak and trough, and determines the peak to be the first distance fitting value and the trough to be the second distance fitting value.
For example, suppose that in step 210 the video images were split into m frames, whose distance fitting values are HC_{T-m+1}, HC_{T-m+2}, ..., HC_{T-2}, HC_{T-1}, HC_T. These m values form a discrete distance-fitting-value curve, in which the frame time corresponding to HC_T is the starting point of the preset time period, as shown in Fig. 4.
Further, the pair of peak and trough closest to frame time T in this curve is determined. Corresponding to Fig. 4, the distance fitting values at frame times (T-1) and (T-2) are the closest pair: HC_{T-1} at frame time (T-1) is determined to be the peak, and HC_{T-2} at frame time (T-2) to be the trough.
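A sketch of this extraction. The text does not define what counts as a peak or trough of the discrete curve, so the local-extremum test below, scanning backwards from the starting point of the preset time period, is an assumption:

```python
def nearest_peak_and_trough(hc):
    """Step 250: return ((K, HC_K), (L, HC_L)) -- the peak and trough
    closest to the end of the series (the preset period's starting point)."""
    peak = trough = None
    for t in range(len(hc) - 2, 0, -1):  # interior points, newest first
        if peak is None and hc[t - 1] < hc[t] > hc[t + 1]:
            peak = (t, hc[t])            # first local maximum encountered
        if trough is None and hc[t - 1] > hc[t] < hc[t + 1]:
            trough = (t, hc[t])          # first local minimum encountered
        if peak is not None and trough is not None:
            return peak, trough
    return None                          # no complete pair in this window
```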
Step 260: the human motion analysis platform determines, based on the obtained set of area fitting values, that a target action occurs within the preset time period when the extracted first and second distance fitting values satisfy a preset condition.
Specifically, the first frame time corresponding to the first distance fitting value and the second frame time corresponding to the second distance fitting value are determined, and the first area fitting value corresponding to the second frame time is extracted from the obtained set of area fitting values. When it is determined both that the absolute difference between the second frame time and the first frame time lies between the preset first frame-time threshold and the preset second frame-time threshold, and that the absolute difference between the second distance fitting value and the first distance fitting value lies between twice and three times the first area fitting value, it is determined that a target action occurs within the preset time period.
In the embodiments of the present invention, the preset first and second frame-time thresholds are empirical values for a human body performing the target action, obtained through long-term training. Likewise, the factors "twice" and "three times" used when judging the absolute difference between the second and first distance fitting values were obtained from long-term training on the target action and can be adjusted dynamically.
Preferably, in the embodiments of the present invention, whether a target action occurs within the preset time period is judged with the following formulas:
t_min ≤ |L - K| ≤ t_max (Formula 1); and
2 × SC_L ≤ |HC_L - HC_K| ≤ 3 × SC_L (Formula 2)
where, in Formula 1, t_min is the preset first frame-time threshold, t_max is the preset second frame-time threshold, K is the frame time corresponding to the peak (the first distance fitting value), and L is the frame time corresponding to the trough (the second distance fitting value);
and, in Formula 2, HC_K is the first distance fitting value, HC_L is the second distance fitting value, and SC_L is the first area fitting value.
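Formula 1 and Formula 2 transcribe directly; t_min and t_max (and the factors 2 and 3, which the text says may be tuned) are the empirical parameters described above:

```python
def target_action_occurred(k, hc_k, l, hc_l, sc_l, t_min, t_max):
    """Step 260: both conditions must hold for a target action to be reported."""
    duration_ok = t_min <= abs(l - k) <= t_max             # Formula 1
    travel_ok = 2 * sc_l <= abs(hc_l - hc_k) <= 3 * sc_l   # Formula 2
    return duration_ok and travel_ok
```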
Further, after it is determined that the subject performs a target action, the relationship between the second distance fitting value and the first distance fitting value can be used to further judge whether the target action is standing up or sitting down: if the second distance fitting value is higher than the first, the target action is determined to be standing up; if the second is lower than the first, the target action is determined to be sitting down.
Preferably, in the embodiments of the present invention, after a target action is determined to have occurred, the following formula is used to further judge whether the target action is standing up or sitting down (the formula is an image in the source and is not reproduced here):
where HC_K is the first distance fitting value and HC_L is the second distance fitting value.
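The formula here is again an unreproduced image, but the preceding paragraph fixes its meaning, so the comparison can be sketched as:

```python
def classify_action(hc_k, hc_l):
    """Trough above peak -> standing up; trough below peak -> sitting down."""
    return "stand up" if hc_l > hc_k else "sit down"
```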
Referring to Fig. 5, in an embodiment of the present application, the action recognition device comprises at least a first extraction unit 50, a first determination unit 51, a second extraction unit 52, and a second determination unit 53, wherein:
the first extraction unit 50 is configured to extract video images within a preset time period;
the first determination unit 51 is configured to determine, for each frame contained in the video images, an area fitting value and a distance fitting value of the human face in the frame, to obtain a set of area fitting values and a set of distance fitting values;
the second extraction unit 52 is configured to extract a first distance fitting value and a second distance fitting value from the set of distance fitting values according to a preset rule;
the second determination unit 53 is configured to determine, when it is determined based on the set of area fitting values that the first distance fitting value and the second distance fitting value satisfy a preset condition, that a target action occurs within the preset time period.
Optionally, when determining the area fitting value and the distance fitting value of the human face in any frame contained in the video images, the first determination unit 51 is configured to:
perform face detection on the frame to determine a face region;
determine, based on the face region, the coordinate position of the face in the frame;
calculate, based on the coordinate position, an area value and a distance value of the face, the distance value being the distance between the face and the lower horizontal edge of the frame;
determine, based on the area value and the distance value, an area fitting value corresponding to the area value and a distance fitting value corresponding to the distance value.
Optionally, when extracting the first distance fitting value and the second distance fitting value from the set of distance fitting values according to the preset rule, the second extraction unit 52 is configured to:
record each distance fitting value in the set of distance fitting values to form a discrete distance-fitting-value curve;
determine the pair of peak and trough closest to the starting point of the preset time period;
extract the peak and the trough, and determine the peak to be the first distance fitting value and the trough to be the second distance fitting value.
Optionally, when determining, based on the set of area fitting values, that a target action occurs within the preset time period when the first distance fitting value and the second distance fitting value satisfy the preset condition, the second determination unit 53 is configured to:
determine a first frame time corresponding to the first distance fitting value and a second frame time corresponding to the second distance fitting value;
extract, from the set of area fitting values, a first area fitting value corresponding to the second frame time;
determine that the absolute difference between the second frame time and the first frame time lies between a preset first frame-time threshold and a preset second frame-time threshold; and
determine that the absolute difference between the second distance fitting value and the first distance fitting value lies between twice the first area fitting value and three times the first area fitting value;
thereupon determine that the target action occurs within the preset time period.
Optionally, after determining that the target action occurs within the preset time period, the second determination unit 53 is further configured to:
determine that the target action is standing up if the second distance fitting value is higher than the first distance fitting value;
determine that the target action is sitting down if the second distance fitting value is lower than the first distance fitting value.
In the embodiments of the present invention, video images within a preset time period are first extracted; the area fitting value and the distance fitting value of the face in each frame contained in the video images are determined to obtain a set of area fitting values and a set of distance fitting values; a first distance fitting value and a second distance fitting value are extracted from the set of distance fitting values according to a preset rule; and when, based on the set of area fitting values, the extracted first and second distance fitting values satisfy a preset condition, it is determined that a target action occurs within the preset time period. In this way, only the subject's face needs to be detected for action recognition to be completed in real time, which effectively avoids the problem that accurate action recognition is impossible because the subject's limbs and trunk cannot be determined when the camera's line of sight is occluded, and at the same time effectively avoids the problem that accurate action recognition is impossible because of differences in subjects' heights or in the camera's distance.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing equipment to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing equipment produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are performed on the computer or other programmable equipment to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art can make further changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various changes and variations to the embodiments of the present invention without departing from the spirit and scope of the embodiments of the present invention. Thus, if these modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to encompass these changes and variations.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611160942.1A CN108229251A (en) | 2016-12-15 | 2016-12-15 | A kind of action identification method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611160942.1A CN108229251A (en) | 2016-12-15 | 2016-12-15 | A kind of action identification method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108229251A true CN108229251A (en) | 2018-06-29 |
Family
ID=62651501
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611160942.1A Pending CN108229251A (en) | 2016-12-15 | 2016-12-15 | A kind of action identification method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108229251A (en) |
- 2016-12-15 — CN application CN201611160942.1A filed; published as CN108229251A (en), status: active, Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103310192A (en) * | 2013-06-06 | 2013-09-18 | 南京邮电大学 | Movement behavior recognition method based on axial acceleration sensor |
CN103327250A (en) * | 2013-06-24 | 2013-09-25 | 深圳锐取信息技术股份有限公司 | Method for controlling camera lens based on pattern recognition |
US20150154449A1 (en) * | 2013-11-29 | 2015-06-04 | Fujitsu Limited | Method and apparatus for recognizing actions |
CN103780837A (en) * | 2014-01-02 | 2014-05-07 | 中安消技术有限公司 | Motion detection and positioning photography method and device thereof |
JP2014147813A (en) * | 2014-04-07 | 2014-08-21 | Copcom Co Ltd | Game apparatus, and game program for realizing the game apparatus |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109117757A (en) * | 2018-07-27 | 2019-01-01 | 四川大学 | A kind of method of drag-line in extraction Aerial Images |
CN109117757B (en) * | 2018-07-27 | 2022-02-22 | 四川大学 | A method for extracting cables in aerial images |
CN109165578A (en) * | 2018-08-08 | 2019-01-08 | 盎锐(上海)信息科技有限公司 | Expression detection device and data processing method based on filming apparatus |
CN114026615A (en) * | 2020-04-01 | 2022-02-08 | 商汤国际私人有限公司 | An image recognition method, device and storage medium |
CN112200828A (en) * | 2020-09-03 | 2021-01-08 | 浙江大华技术股份有限公司 | Detection method and device for ticket evasion behavior and readable storage medium |
CN112614214A (en) * | 2020-12-18 | 2021-04-06 | 北京达佳互联信息技术有限公司 | Motion capture method, motion capture device, electronic device and storage medium |
CN112614214B (en) * | 2020-12-18 | 2023-10-27 | 北京达佳互联信息技术有限公司 | Motion capture method, motion capture device, electronic equipment and storage medium |
CN114007105A (en) * | 2021-10-20 | 2022-02-01 | 浙江绿城育华教育科技有限公司 | Online course interaction method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108229251A (en) | A kind of action identification method and device | |
CN110287923B (en) | Human body posture acquisition method, device, computer equipment and storage medium | |
WO2018188453A1 (en) | Method for determining human face area, storage medium, and computer device | |
US11398049B2 (en) | Object tracking device, object tracking method, and object tracking program | |
US8706663B2 (en) | Detection of people in real world videos and images | |
JP6024658B2 (en) | Object detection apparatus, object detection method, and program | |
US20160232399A1 (en) | System and method of detecting a gaze of a viewer | |
US20170236304A1 (en) | System and method for detecting a gaze of a viewer | |
JP6280020B2 (en) | Moving object tracking device | |
CN111243229A (en) | A fall risk assessment method and system for the elderly | |
CN110163046B (en) | Human body posture recognition method, device, server and storage medium | |
US10496874B2 (en) | Facial detection device, facial detection system provided with same, and facial detection method | |
CN110910449B (en) | Method and system for identifying three-dimensional position of object | |
EP3757878A1 (en) | Head pose estimation | |
EP3699865B1 (en) | Three-dimensional face shape derivation device, three-dimensional face shape deriving method, and non-transitory computer readable medium | |
CN111797652B (en) | Object tracking method, device and storage medium | |
CN112446254A (en) | Face tracking method and related device | |
Štrbac et al. | Kinect in neurorehabilitation: computer vision system for real time hand and object detection and distance estimation | |
WO2022041953A1 (en) | Behavior recognition method and apparatus, and storage medium | |
Worrakulpanit et al. | Human fall detection using standard deviation of C-motion method | |
US11482031B2 (en) | System and method for detecting potentially dangerous human posture | |
Nguyen et al. | Real-time human tracker based on location and motion recognition of user for smart home | |
EP4300446A1 (en) | Methods and systems for detecting fraud during biometric identity verification | |
US20240260854A1 (en) | Physical-ability estimation system, physical-ability estimation method, and recording medium | |
Mikrut et al. | Combining pattern matching and optical flow methods in home care vision system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20180629 |