Place: Large Lecture Room
Affiliation: Universidad de Zaragoza
It is well known that, using only the image sequence gathered by a moving camera observing a scene, both the camera trajectory and the 3D structure of the scene can be estimated. In robotics this problem is known by the acronym SLAM (Simultaneous Localization And Mapping); it is one of the most researched topics in robotics because it embodies one of the basic perception abilities of a mobile robot. The talk focuses on sequential SLAM estimation in real time, at frame rate, using visual sensors.
The talk reviews SLAM methods that use computer vision sensors as input. First, real-time recovery of scene geometry is considered, concluding that state-of-the-art methods are quite efficient at providing the camera location, but at the expense of neglecting the estimation of a rich description of the scene. Next, the focus turns to the insertion of recognized objects into the estimated map, in order to increase its semantic content. Finally, the application of SLAM to medical endoscope sequences and to non-rigid scenes is considered.