Title: Action-centric Polar Representation of Motion Trajectories for Online Action Recognition

Authors: Fabio Martinez (1); Antoine Manzanera (2); Michèle Gouiffès (3) and Thanh Phuong Nguyen (4)

Affiliations: (1) LIMSI-CNRS, Université Paris-Saclay, U2IS/Robotics-Vision, ENSTA-ParisTech and Université Paris-Saclay, France; (2) U2IS/Robotics-Vision, ENSTA-ParisTech and Université Paris-Saclay, France; (3) LIMSI-CNRS and Université Paris-Saclay, France; (4) LSIS, UMR 7296 and Université du Sud Toulon Var, France

Keyword(s): Action Recognition, Semi-dense Trajectories, Motion Shape Context, On-line Action Descriptors.

Abstract: This work introduces a novel action descriptor that represents activities instantaneously in each frame of a video sequence for action recognition. The proposed approach first characterizes the video by computing kinematic primitives along trajectories obtained by semi-dense point tracking. Then, a frame-level characterization is achieved by computing a spatial, action-centric polar representation from the computed trajectories. This representation quantizes the image space and groups the trajectories within radial and angular regions. Motion histograms are then temporally aggregated in each region to form a kinematic signature from the current trajectories. Histograms with several time depths can be computed to obtain different versions of the motion characterization. These motion histograms are updated at each time step to reflect the kinematic trend of the trajectories in each region. The action descriptor is then defined as the collection of motion histograms from all the regions in a specific frame. Classic support vector machine (SVM) models are used to carry out the classification for each time depth. The proposed approach is easy to implement, very fast, and the representation consistently encodes a broad variety of actions thanks to a multi-level representation of motion primitives. The proposed approach was evaluated on different public action datasets, showing competitive results (94% and 88.7% accuracy on the KTH and UT datasets, respectively) and an efficient computation time.
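The pipeline summarized in the abstract can be sketched in a few lines of Python. The snippet below is a minimal, hypothetical reading of the descriptor, not the authors' implementation: semi-dense trajectory points are assigned to radial/angular regions of a polar grid centred on the action, each region accumulates an orientation histogram of the trajectory displacements weighted by their speed, and the histograms are blended over time to emulate a given time depth. The grid sizes, the speed weighting, and the exponential blending rule are assumptions made for the example; the concatenated region histograms would then be the per-frame descriptor fed to one SVM per time depth.

import numpy as np

# Minimal sketch (assumptions, not the paper's code): action-centric polar binning of
# semi-dense trajectory displacements and per-region motion histograms, blended over time.

N_RADIAL, N_ANGULAR = 3, 8      # polar grid around the action centre (assumed sizes)
N_ORIENT_BINS = 8               # orientation bins of each motion histogram

def polar_region(point, centre, max_radius):
    """Map an image point to a (radial, angular) region index around the action centre."""
    dx, dy = point[0] - centre[0], point[1] - centre[1]
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx) % (2 * np.pi)
    r_idx = min(int(N_RADIAL * r / max_radius), N_RADIAL - 1)
    a_idx = int(N_ANGULAR * theta / (2 * np.pi)) % N_ANGULAR
    return r_idx, a_idx

def frame_descriptor(points, displacements, centre, max_radius, prev_hist=None, decay=0.9):
    """One orientation histogram of trajectory motion per polar region for the current frame,
    blended with the previous frame's histograms (exponential decay is an assumed update rule)."""
    hist = np.zeros((N_RADIAL, N_ANGULAR, N_ORIENT_BINS))
    for p, d in zip(points, displacements):
        speed = np.hypot(d[0], d[1])
        if speed < 1e-3:        # skip quasi-static trajectory points
            continue
        r_idx, a_idx = polar_region(p, centre, max_radius)
        o_idx = int(N_ORIENT_BINS * (np.arctan2(d[1], d[0]) % (2 * np.pi)) / (2 * np.pi)) % N_ORIENT_BINS
        hist[r_idx, a_idx, o_idx] += speed   # weight votes by kinematic magnitude
    if prev_hist is not None:
        hist = decay * prev_hist + (1.0 - decay) * hist
    return hist                  # hist.flatten() would be the per-frame descriptor for an SVM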

License: CC BY-NC-ND 4.0


Paper citation in several formats:
Martinez, F.; Manzanera, A.; Gouiffès, M. and Nguyen, T. (2016). Action-centric Polar Representation of Motion Trajectories for Online Action Recognition. In Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2016) - Volume 4: VISAPP; ISBN 978-989-758-175-5; ISSN 2184-4321, SciTePress, pages 442-448. DOI: 10.5220/0005730404420448

@conference{visapp16,
author={Fabio Martinez and Antoine Manzanera and Michèle Gouiffès and Thanh Phuong Nguyen},
title={Action-centric Polar Representation of Motion Trajectories for Online Action Recognition},
booktitle={Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2016) - Volume 4: VISAPP},
year={2016},
pages={442-448},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005730404420448},
isbn={978-989-758-175-5},
issn={2184-4321},
}

TY - CONF
JO - Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2016) - Volume 4: VISAPP
TI - Action-centric Polar Representation of Motion Trajectories for Online Action Recognition
SN - 978-989-758-175-5
IS - 2184-4321
AU - Martinez, F.
AU - Manzanera, A.
AU - Gouiffès, M.
AU - Nguyen, T.
PY - 2016
SP - 442
EP - 448
DO - 10.5220/0005730404420448
PB - SciTePress
ER -