Meta-RL for Visual Semantic Navigation

CVC Seminar

Abstract:

Semantic and goal-oriented visual navigation is one of the most prominent tasks performed by intelligent species in their daily lives. The task is defined as the ability to navigate through an environment, finding targets and enabling interaction with them. Navigation methods used in robotics can be divided into two main categories: geometry-based and learning-based. Semantic visual navigation belongs to the second group: it requires neither knowing a map of the environment a priori nor building such a map "on the fly". In this talk we will present the problem of semantic visual navigation in detail, reviewing the state-of-the-art models. Finally, we will describe the main advances we have made on this problem using a reinforcement learning and meta-learning approach.

Short bio:

Roberto Javier López-Sastre received a Master's degree in Electrical Engineering from the University of Alcalá, Spain, in 2005. He worked in the GRAM research group within the Department of Signal Theory and Communications, where he defended his PhD thesis in Electrical Engineering, entitled "Visual Vocabularies for Category-Level Object Recognition", on May 18, 2010. In 2008, he spent 6 months in lovely Leuven, working with Tinne Tuytelaars and the VISICS-PSI research group. In the summer of 2010, he visited Silvio Savarese's group at the University of Michigan. His research interests, in reverse chronological order, include: meta-learning, semantic visual navigation, AI applied to astrophysics, unsupervised learning, activity recognition, object category pose estimation, semantic visual vocabularies, and category-level object recognition.