Actom Sequence Models for Efficient Action Detection

Place: Large Lecture Room - CVC

Affiliation: MSR-INRIA (LEAR team)

This talk will address the problem of detecting actions, such as drinking or opening a door, in hours of challenging video data. We propose a model based on a sequence of atomic action units, termed "actoms", that are characteristic of the action. Our model represents the temporal structure of an action as a sequence of histograms of actom-anchored visual features. This representation, which can be seen as a temporally structured extension of the bag-of-features, is flexible, sparse and discriminative. We refer to our model as the Actom Sequence Model (ASM). Training requires actom annotations for the action clips. At test time, actoms are detected automatically, based on a non-parametric model of the actom distribution, which also acts as a prior on the action's temporal structure. We will present experimental results on two recent benchmarks for temporal action detection and show that our ASM method outperforms the current state of the art.
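To make the idea of actom-anchored histograms concrete, the Python sketch below builds a toy ASM-style descriptor from pre-quantized local features. It is only an illustration of the general construction described in the abstract: the function name asm_descriptor, the Gaussian temporal weighting, and the parameter values (sigma, radius) are assumptions, not the authors' exact formulation.

```python
import numpy as np

def asm_descriptor(feature_times, feature_words, actom_centers,
                   vocab_size, sigma=15.0, radius=45.0):
    """Toy actom-anchored descriptor (illustrative, not the paper's exact model).

    feature_times : frame indices where quantized local features occur
    feature_words : visual-word ids (0..vocab_size-1), same length as feature_times
    actom_centers : frame indices of the annotated (or detected) actoms
    sigma, radius : temporal soft-assignment bandwidth and window half-width
    """
    feature_times = np.asarray(feature_times, dtype=float)
    feature_words = np.asarray(feature_words, dtype=int)
    histograms = []
    for center in actom_centers:
        dt = feature_times - center
        # Temporal weighting: features near the actom count more, distant ones are dropped.
        weights = np.exp(-0.5 * (dt / sigma) ** 2)
        weights[np.abs(dt) > radius] = 0.0
        # Weighted bag-of-features histogram anchored on this actom.
        hist = np.bincount(feature_words, weights=weights, minlength=vocab_size)
        norm = hist.sum()
        histograms.append(hist / norm if norm > 0 else hist)
    # The descriptor is the concatenation of the per-actom histograms,
    # so the temporal order of the actoms is preserved.
    return np.concatenate(histograms)

# Example: 3 actoms, a 5-word vocabulary, a handful of quantized features.
times = [10, 12, 30, 31, 55, 58, 60]
words = [0, 3, 1, 1, 4, 2, 2]
desc = asm_descriptor(times, words, actom_centers=[11, 30, 57], vocab_size=5)
print(desc.shape)  # (15,): one 5-bin histogram per actom
```

In this sketch the sequential structure comes from concatenating the per-actom histograms in temporal order, which is what distinguishes the representation from a single clip-level bag of features.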