Developing autonomous systems capable of assisting humans in their daily activities requires understanding many kinds of environments, depending on the application: self-driving cars, unmanned aerial vehicles, and service or underwater robots, among others.
Understanding the world from visual data has been one of the central challenges of computer vision since the field's inception. This research area has recently become a focus of attention again, leveraging advances in a wide range of topics that include scene parsing, mapping, detection, and reconstruction. Nevertheless, there is still a long way to go before we have approaches capable of a detailed understanding of the environments that surround us.
Accordingly, the SUAS workshop aims to summarize advances in this area through invited talks by experienced researchers and poster presentations by young, active researchers. The event also aims to provide an opportunity to discuss and debate the advantages and drawbacks of current visual scene understanding approaches, as well as possible future directions for research in this line.
We invite the submission of research contributions on the following topics:
- 2D/3D object detection and recognition
- 2D/3D semantic segmentation
- 3D reconstruction and reasoning
- Learning and inference techniques for scene understanding
- Motion and tracking
- Scene recognition
- Semantic mapping
- Vision-based exploration, planning and navigation
- Visual simultaneous localization and mapping
This list is not exhaustive; we also welcome other relevant research on scene understanding for autonomous systems.
This workshop is supported by the Spanish projects TRA2011-29454-C03-01 and TIN2011-29494-C03-02, and by the Secretary for Universities and Research of the Ministry of Economy and Knowledge of the Generalitat of Catalonia (2014-SGR-1506).