Analysis of Head-Pose Invariant, Natural Light Gaze Estimation Methods

September 20, 2017 at 4:00 pm

Place: CVC Sala d’actes
Committee:

Dr. Arantzazu Villanueva – Public University of Navarre. Electrical and Electronics Engineering Department

Dr. Dimosthenis Karatzas – Universitat Autònoma de Barcelona. Dept. Ciències de la Computació

Dr. Christian Frisson – Inria, Lille Nord-Europe Center. Mjolnir Team

Thesis Supervisor:
Dr. Fernando Vilariño – Dept. Ciències de la Computació & Centre de Visió per Computador

 

Abstract:
Eye tracker devices have traditionally been used only inside laboratories, requiring trained professionals and elaborate setup procedures. In recent years, however, scientific work on easier-to-use eye trackers that require no special hardware (other than the front-facing cameras now ubiquitous in computers, tablets, and mobile phones) has aimed at making this technology commonplace. Such trackers face several additional challenges that make the problem harder: the low resolution of images from a regular webcam, changing ambient lighting conditions, differences in personal appearance, changes in head pose, and so on. Recent research in the field has addressed these challenges in order to improve gaze estimation performance in real-world setups.
In this work, we tackle the gaze tracking problem in a single-camera setup. We first review prior work in the field, identifying the strengths and weaknesses of each approach. We begin our work on the gaze tracker with an appearance-based gaze estimation method, the simplest idea: a direct mapping between a rectangular image patch extracted around the eye in a camera image and the gaze point (or gaze direction). Here, we carry out an extensive analysis of the factors that affect this tracker's performance in several experimental setups, so that these problems can be addressed in future work. In the second part of our work, we propose a feature-based gaze estimation method, which encodes the eye-region image into a compact representation. We argue that this type of representation is better suited to handling changes in head pose and lighting conditions, as it both reduces the dimensionality of the input (i.e., the eye image) and breaks the direct link between image pixel intensities and the gaze estimate. Lastly, we use a face alignment algorithm to obtain robust head pose estimation, based on a 3D face model customized to the subject using the tracker. We combine this with a convolutional neural network trained on a large dataset of images to build a head pose invariant gaze tracker.
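The appearance-based idea described above (a direct mapping from a flattened eye-image patch to a 2D gaze point) can be sketched as a simple regularized linear regression. The sketch below uses synthetic data and hypothetical patch dimensions purely for illustration; it is not the thesis' actual pipeline, which analyzes this family of methods experimentally.

```python
import numpy as np

# Illustrative sketch of an appearance-based gaze mapper: ridge
# regression from a flattened eye-image patch to a 2-D gaze point.
# Patch size, sample count, and data are synthetic placeholders.
rng = np.random.default_rng(0)

n_samples, patch_h, patch_w = 300, 8, 12   # hypothetical patch size
d = patch_h * patch_w

# Synthetic training set: random "eye patches" and gaze points that
# depend linearly on them, plus a small amount of noise.
X = rng.normal(size=(n_samples, d))
W_true = rng.normal(size=(d, 2))
Y = X @ W_true + 0.01 * rng.normal(size=(n_samples, 2))

# Closed-form ridge regression: W = (X^T X + lam * I)^{-1} X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Predict the gaze point for a new, unseen patch.
x_new = rng.normal(size=(1, d))
gaze = x_new @ W
print(gaze.shape)  # (1, 2): an (x, y) gaze estimate
```

Because the mapping operates directly on raw pixel intensities, any change in head pose or lighting shifts the input distribution, which is exactly the weakness that motivates the compact feature-based representation proposed in the second part of the work.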

 

Watch the video presentation
