Simulation and Autonomous Driving in the Deep Learning Era


CARLA is an open-source simulator for autonomous driving research. It has been developed from the ground up to support the development, training, and validation of autonomous urban driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. The simulation platform supports flexible specification of sensor suites and environmental conditions. We use CARLA to study the performance of three approaches to autonomous driving: a classic modular pipeline, an end-to-end model trained via imitation learning, and an end-to-end model trained via reinforcement learning. This project was developed in collaboration with Intel Labs and the Toyota Research Institute.
Download the paper



The SYNTHetic Collection of Imagery and Annotations (SYNTHIA) is a dataset generated with the purpose of aiding semantic segmentation and related scene-understanding problems in the context of driving scenarios. It consists of photo-realistic frames rendered from a virtual city and comes with precise pixel-level semantic annotations for several classes such as: sky, building, sidewalk, fence, vegetation, pole, car, traffic sign, pedestrian, traffic light, etc.
Download the paper



Virtual Worlds, Domain Adaptation


The ACDC project is the first step towards developing an intelligent system for self-driving cars that belong to a coordinated fleet in the city. It focuses on developing advanced software for processing data coming from relatively cheap sensors. Its long-term objective is to develop intelligent software able to analyse raw sensor data to achieve a high degree of environmental perception, as well as the software required to allow vehicles to perform cooperative manoeuvres by taking into account not only their own perception but also information coming from other vehicles and/or the city infrastructure.
Funding: MINECO



The aim of this project is to research technologies for bringing ADAS to urban-oriented electric vehicles. Our proposal has two major distinctive features: the use of vision as an “eco”-sensor, and a driver-centric approach, i.e., rather than treating road monitoring and driver monitoring as stand-alone ADAS, we make them cooperate in order to assist the driver only when he/she really needs it; in other words, they work as actual co-drivers. Together, these two ideas form our concept of the eco-driver. We organize the overall coordinated project “eCo-DRIVERS: Ecologic Cooperative Driver and Road Intelligent Visual Exploration for Route Safety” as three complementary and collaborative subprojects: “Vision-based Driver Assistance Systems for Urban Environments (ViDAS-UrbE)”; “Driver Distraction Detection System (D3System)”; and “Intelligent Agent-based Driver Decision Support (i-Support)”.
Funding: Ministerio de Ciencia e Innovación (MICINN)



Vision-based ADAS (advanced driver assistance systems)


Elektra is an autonomous vehicle project developed by CVC, UAB and UPC. It uses computer vision techniques for perception (stereo, stixels, pedestrian detection) and localization (GPS+IMU, and vision). The electric car Tazzari Zero is equipped for environment perception with a stereo rig, IMU, computers and GPUs, and is adapted for automated driving: through the CAN bus we can read the vehicle state (speed, steering wheel angle, etc.) and brake, accelerate and move the steering wheel, among others.
Funding: Spanish project TRA2014-57088-C2-1-R and NVIDIA



A pedestrian detector (PD) always faces a dilemma between producing too many false alarms and failing to detect all pedestrians, some of whom could be in danger. The MAPEA2 project aims to create “risk maps” relating the detections to locations. By knowing the usual movements of pedestrians in different areas, the PD can use this information to make better decisions.
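The risk-map idea can be sketched as a grid of past pedestrian-detection frequencies that biases the detector's decisions. The function names, grid cell size, and blending weight below are illustrative assumptions, not the project's actual design:

```python
from collections import Counter

def build_risk_map(detections, cell_size=10):
    """Accumulate past pedestrian detections (x, y) into a coarse grid
    and normalise to a per-cell prior of pedestrian presence."""
    counts = Counter()
    for x, y in detections:
        counts[(x // cell_size, y // cell_size)] += 1
    total = sum(counts.values())
    return {cell: n / total for cell, n in counts.items()}

def adjusted_score(raw_score, cell, risk_map, weight=0.3):
    """Blend the detector's raw confidence with the location prior, so
    a weak detection in a high-risk area is not discarded outright."""
    return (1 - weight) * raw_score + weight * risk_map.get(cell, 0.0)
```

With such a prior, the same raw detector score yields a higher final score in a cell where pedestrians are frequently observed than in one where they are rare.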
Funding: Ministerio del Interior



This project is centred on a very suitable future transportation vehicle: city-oriented electric cars. The novelty of our approach relied on using a shadow-invariant feature space combined with a model-based classifier. The model was built online to improve the adaptability of the algorithm to the current lighting and the presence of other vehicles in the scene. The proposed algorithm worked on still images and did not depend on either road shape or temporal restrictions.
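Shadow-invariant feature spaces of this kind are commonly obtained by projecting log-chromaticity coordinates onto an illumination-invariant direction. The sketch below assumes that setup and is not necessarily the exact formulation used in the project; the projection angle `theta` is camera-dependent and would be calibrated offline (0.7 rad is a placeholder):

```python
import math

def shadow_invariant(r, g, b, theta=0.7, eps=1e-6):
    """Map an RGB pixel to a 1-D illumination-invariant value by
    projecting its log-chromaticity coordinates onto direction theta;
    pixels of the same surface in sun and in shadow map close together,
    which is what makes road classification robust to cast shadows."""
    lr = math.log(r + eps) - math.log(g + eps)   # log(R/G)
    lb = math.log(b + eps) - math.log(g + eps)   # log(B/G)
    return lr * math.cos(theta) + lb * math.sin(theta)
```

An online road model could then be built by sampling the invariant values of a region assumed to be road (e.g. just in front of the vehicle) and classifying the rest of the image by distance to that model.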
Funding: Ministerio de Ciencia e Innovación (MICINN)


Vehicle Detection

Determining the position of other vehicles on the road is key information for driver assistance systems to increase the safety of the driver. Accordingly, our work addresses the problem of detecting the vehicles in front of our own and estimating their 3D position using a single monochrome camera. Rather than using predefined high-level image features such as symmetry, shadow search, etc., our vehicle detection proposal is based on a learning process that determines, from a training set, which features best distinguish vehicles from non-vehicles.
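In its simplest form, such a learning process scores candidate features on labelled training windows and keeps the most discriminative ones. The decision-stump sketch below illustrates the idea only; the project's actual features and learning algorithm are not specified here:

```python
def best_stump(samples, labels):
    """Given feature vectors for labelled windows (1 = vehicle,
    0 = non-vehicle), select the single (feature, threshold) pair that
    best separates the classes on the training set. A boosting scheme
    would combine many such stumps into a strong classifier."""
    best = (0, 0.0, 0.0)  # (feature index, threshold, accuracy)
    for f in range(len(samples[0])):
        for t in sorted({s[f] for s in samples}):
            hits = sum((s[f] > t) == bool(y) for s, y in zip(samples, labels))
            acc = max(hits, len(labels) - hits) / len(labels)  # either polarity
            if acc > best[2]:
                best = (f, t, acc)
    return best
```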
Funding: Volkswagen AG, SEAT


Headlight Control

We developed a night-time vehicle detection system whose core was a novel classifier-based module that labels each detected target as vehicle or non-vehicle. However, in general it is unrealistic to assume that a classifier, or a set of them, provides a perfect detection rate with no false alarms. Therefore, we proposed to exploit the temporal coherence of the targets' classification. The system worked at night-time under both wet and dry conditions.
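One simple way to exploit temporal coherence is a majority vote over a tracked target's recent per-frame labels, so that a single misclassified frame does not flip the headlight decision. The class below is a minimal sketch of that idea, not the module actually deployed:

```python
from collections import deque

class TemporalLabel:
    """Stabilise a per-frame vehicle/non-vehicle decision by majority
    vote over the last `window` frames of one tracked target."""

    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def update(self, is_vehicle):
        """Record the current frame's label; return the smoothed label."""
        self.history.append(bool(is_vehicle))
        return sum(self.history) * 2 > len(self.history)
```

One such accumulator would be kept per tracked target, and the smoothed label, rather than the raw per-frame one, would drive the high-beam decision.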
Funding: Volkswagen AG


Being On-Lane: Lane markings detection system

Detection of lane markings with a camera sensor can be a low-cost solution for lane departure warning and lateral control. However, reliable detection is difficult due to cast shadows, vehicles occluding the marks, wear, vehicle motion, etc. In our work we proposed to use ridgeness as a low-level descriptor to detect and characterize the lane markings: in the intensity image, lane markings resemble mountains whose ridges correspond to the centres of the markings. RANSAC was then used to fit a parametric model to the curvature of the lanes in the image.
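The robust fitting step can be sketched with RANSAC on candidate ridge points. For brevity the model here is a straight line y = m·x + c rather than the parametric curvature model used in the work, and the tolerance and iteration count are illustrative:

```python
import random

def ransac_line(points, iters=200, tol=2.0, seed=0):
    """Fit y = m*x + c to noisy candidate points: repeatedly hypothesise
    a line from two random points and keep the hypothesis with the most
    inliers, so spurious ridge responses (shadows, other vehicles) do
    not corrupt the fit."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical hypothesis: undefined slope in this model
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + c)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, c), inliers
    return best_model, best_inliers
```

The final model would typically be refined by a least-squares fit over the winning inlier set.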
Funding: Volkswagen AG, SEAT


Computer vision based detection and tracking of vehicles and pedestrians for ADAS

In this project we went one step further than the 2004-2007 project, not only improving the solutions for lane marking and pedestrian/vehicle detection, but also researching other topics such as road segmentation and crowd detection. These functionalities were the basis for ADAS applications like adaptive cruise control, lane/road departure warning, lane/road keeping and pedestrian protection systems. From the scientific point of view, the research addressed the computer vision topics of multi-class classifiers (feature selection and machine learning), multi-target tracking, color and texture analysis, illumination-invariant images, movement analysis (ego-motion, gait pattern), robust model fitting and stereo analysis.
Funding: Ministerio de Educación y Ciencia (MEC)


Computer Vision Detection and Tracking of Vehicles and Pedestrians. Validation on an Intelligent Vehicle Prototype

The aim of this project was to devise new machine vision and pattern recognition techniques able to solve the following technological problems: detection and tracking of vehicles with a monocular system, at day and night; pedestrian detection with a stereo-vision system and deformable templates; lane marking detection in curves, using parametric robust fitting techniques; and computation of geometrical measures from images, such as the distance to obstacles and the curvature of the current lane.
Funding: Ministerio de Educación y Ciencia (MEC)



PhD Work and Quality Control

(Under reconstruction)