The" something something" video database for learning and evaluating visual common sense
R Goyal, S Ebrahimi Kahou, et al. - Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017 - openaccess.thecvf.com
Abstract
Neural networks trained on datasets such as ImageNet have led to major advances in visual object classification. One obstacle that prevents networks from reasoning more deeply about complex scenes and situations, and from integrating visual knowledge with natural language, like humans do, is their lack of common sense knowledge about the physical world. Videos, unlike still images, contain a wealth of detailed information about the physical world. However, most labelled video datasets represent high-level concepts rather than detailed physical aspects about actions and scenes. In this work, we describe our ongoing collection of the "something-something" database of video prediction tasks whose solutions require a common sense understanding of the depicted situation. The database currently contains more than 100,000 videos across 174 classes, which are defined as caption-templates. We also describe the challenges in crowd-sourcing this data at scale.
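To make the notion of classes "defined as caption-templates" concrete, here is a minimal sketch of how such a template could be represented and instantiated. The template string, class id, and helper names below are illustrative assumptions, not quoted from the dataset or its release code; the only grounded detail is that class labels are templates whose "[something]" placeholders are filled with concrete objects.

```python
from dataclasses import dataclass


@dataclass
class CaptionTemplate:
    """Hypothetical representation of one something-something class.

    Each class is a caption with "[something]" placeholders that crowd
    workers fill in with the actual objects used in their recorded video.
    """
    class_id: int
    template: str  # e.g. "Putting [something] next to [something]" (illustrative)

    def instantiate(self, *objects: str) -> str:
        # Replace placeholders left to right with the given objects.
        caption = self.template
        for obj in objects:
            caption = caption.replace("[something]", obj, 1)
        return caption


# Usage with made-up objects: the class label stays fixed while the
# caption varies with whatever objects a worker chose to film.
tmpl = CaptionTemplate(class_id=0, template="Putting [something] next to [something]")
print(tmpl.instantiate("a cup", "a book"))
# -> Putting a cup next to a book
```

This separation is what lets one class cover many physically distinct videos: the template pins down the action and spatial relation, while the objects are free to vary.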