Place: Large Lecture Room, Computer Vision Center
Affiliation: Computer Vision Center / Univ. Autonoma de Barcelona, Spain
Recovering human motion by visual analysis is a challenging computer vision research area with many potential applications. Model-based tracking approaches, and in particular particle filters, formulate the problem as a Bayesian inference task whose aim is to sequentially estimate the distribution of the parameters of a human body model over time.
The work presented in this Thesis addresses the tracking of a full 3D human body model from a monocular image sequence. The key idea is to learn a set of human motion priors for a predefined set of actions and then use them as a priori knowledge in a particle filter tracking framework, recovering approximate body motion from noisy and incomplete image measurements.
Towards this end, a probabilistic action model is learnt from several databases of real motion-captured action performances. The model captures the variability and temporal evolution of full-body motion and guides the particle set toward previously learnt situations, constraining the solution space to feasible human postures. As a result, the state space is explored more efficiently, and results indicate that the presented tracking scheme can estimate the coarse 3D configuration of a full-body model given only the 2D positions of a reduced set of joints.
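The prior-guided particle filtering idea above can be sketched as follows. This is a minimal toy illustration, not the Thesis's actual model: the "pose" is a 2-D vector, the learnt prior is a single Gaussian over frame-to-frame displacements, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_motion_prior(sequences):
    """Fit a Gaussian over frame-to-frame pose displacements from
    motion-captured training sequences (each of shape (T, D))."""
    deltas = np.concatenate([np.diff(s, axis=0) for s in sequences])
    cov = np.cov(deltas, rowvar=False) + 1e-6 * np.eye(deltas.shape[1])
    return deltas.mean(axis=0), cov

def pf_step(particles, obs, prior, obs_sigma=0.2):
    """One predict/weight/resample cycle of a bootstrap particle filter
    whose proposal is the learnt motion prior, so particles stay in
    regions of feasible motion."""
    mu, cov = prior
    # Predict: propagate particles with displacements sampled from the prior.
    particles = particles + rng.multivariate_normal(mu, cov, size=len(particles))
    # Weight: Gaussian likelihood of the noisy observation.
    err = particles - obs
    w = np.exp(-0.5 * (err ** 2).sum(axis=1) / obs_sigma ** 2)
    w /= w.sum()
    # Resample proportionally to the weights.
    return particles[rng.choice(len(particles), len(particles), p=w)]

# Toy demo: a 2-D "pose" drifting at +0.1 per frame.
train = [np.cumsum(0.1 + 0.01 * rng.standard_normal((50, 2)), axis=0)
         for _ in range(5)]
prior = learn_motion_prior(train)
particles = np.zeros((300, 2))
state = np.zeros(2)
for t in range(30):
    state = state + 0.1
    obs = state + 0.05 * rng.standard_normal(2)  # noisy 2D measurement
    particles = pf_step(particles, obs, prior)
estimate = particles.mean(axis=0)
```

In the actual tracking scheme the state is the full-body pose, the prior is action-specific, and the observation likelihood compares projected joints against detected 2D joint positions; the predict/weight/resample structure is the same.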