Place: CVC Sala d’actes
Dr. Arturo de la Escalera Hueso – Dept. Ingeniería de Sistemas y Automática, Universidad Carlos III de Madrid
Dr. Aura Hernández Sabaté – Computer Vision Center / Dep. de Ciències de la Computació, Universitat Autònoma de Barcelona
Dr. Onay Urfalioglu – Dept. Automotive Engineering Lab, Huawei Munich Research Center
Dr. Antonio M. López – Computer Vision Center / Dep. de Ciències de la Computació, Universitat Autònoma de Barcelona
Dr. David Vázquez Bermúdez – Element AI, Montreal, Canada
Abstract: Anticipating the intentions of vulnerable road users (VRUs) such as pedestrians and cyclists can be critical for performing safe and comfortable driving maneuvers. This is the case for human driving and, therefore, should be taken into account by systems providing any level of driving assistance, from advanced driver assistance systems (ADAS) to fully autonomous vehicles (AVs). In this PhD work, we show how the latest advances in monocular vision-based human pose estimation, i.e. those relying on deep convolutional neural networks (CNNs), make it possible to recognize the intentions of such VRUs. In the case of cyclists, we assume that they follow the established traffic codes, indicating future left/right turns and stop maneuvers with arm signals. In the case of pedestrians, no such indications can be assumed a priori. Instead, we hypothesize that a pedestrian's walking pattern allows us to determine whether he/she intends to cross the road in the path of the ego-vehicle, so that the ego-vehicle can maneuver accordingly (e.g. by slowing down or stopping). We show that the same methodology can be used for recognizing both pedestrians' and cyclists' intentions. For pedestrians, we perform experiments on the publicly available Daimler and JAAD datasets. For cyclists, we did not find an analogous dataset; therefore, we created our own by acquiring and annotating the corresponding video sequences, which we aim to share with the research community. Overall, the proposed pipeline provides new state-of-the-art results on intention recognition of VRUs.
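To illustrate the kind of pipeline the abstract describes, the sketch below shows a plausible second stage: turning a temporal window of per-frame 2D body keypoints (as produced by a CNN pose estimator) into a fixed-length feature vector for an intention classifier. This is a minimal illustration only; the joint layout (17 COCO-style keypoints), the window length, and the normalization scheme are assumptions of this sketch, not the thesis' actual configuration.

```python
import numpy as np

NUM_JOINTS = 17   # COCO-style body-keypoint layout (assumption)
WINDOW = 14       # frames per temporal window fed to the classifier (assumption)

def normalize_pose(keypoints):
    """Center a (NUM_JOINTS, 2) keypoint array on the hip midpoint and
    scale by torso length, making features translation/scale invariant."""
    hips = keypoints[[11, 12]].mean(axis=0)   # left/right hip (COCO indices)
    neck = keypoints[[5, 6]].mean(axis=0)     # shoulder midpoint
    torso = np.linalg.norm(neck - hips) + 1e-6
    return (keypoints - hips) / torso

def sequence_features(frames):
    """Stack normalized poses over a window and append per-joint frame-to-frame
    velocities, yielding one flat descriptor per window."""
    poses = np.stack([normalize_pose(f) for f in frames])  # (T, J, 2)
    velocities = np.diff(poses, axis=0)                    # (T-1, J, 2)
    return np.concatenate([poses.ravel(), velocities.ravel()])

# Toy usage with random "detections" standing in for real pose-estimator output.
rng = np.random.default_rng(0)
window = rng.normal(size=(WINDOW, NUM_JOINTS, 2))
feat = sequence_features(window)
print(feat.shape)  # flat descriptor, ready for a downstream intention classifier
```

The descriptor could then feed any sequence classifier (e.g. an SVM or a small neural network) that outputs crossing/not-crossing for pedestrians or left/right/stop signals for cyclists.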