Towards a no-driver scenario: autonomous and connected cars at the Computer Vision Center

With 70% of the population expected to live in cities by 2050, intelligent mobility is now, more than ever, a pressing topic. Within this context, a project led by CVC researchers Dr. Antonio López and Dr. David Vázquez is achieving positive results with its platform, Elektra, which applies computer vision and deep learning research to advance autonomous driving in city scenarios.

As Dr. López explains, the European Commission considers autonomous driving one of the top ten technologies that will drastically change citizens' lives. Not only will it reduce accidents, it will also help include citizens with limited physical mobility, make transport more efficient and thereby lower our carbon and petrol dependence. "Elektra was born as an autonomous driving platform designed in the context of our project ACDC", states Dr. Antonio López. The ACDC project (Automatic and Collaborative Driving in the City) directs its research in computer vision for ADAS towards level 5 automation, the highest level, meaning no driver is needed even when driving in the city. It is a most challenging project indeed, only possible through a clear synergy of different research groups and companies.

Elektra brings together more than 20 professionals from different backgrounds, all contributing to the project. The research groups involved are the CVC ADAS (Advanced Driver Assistance Systems) group; the CAOS (Computer Architecture & Operating Systems) group at the UAB (Universitat Autònoma de Barcelona); the Research Center of Supervision, Safety and Automatic Control at the UPC (Universitat Politècnica de Catalunya); the CTTC (Telecommunications Technological Centre of Catalonia) and the IEEC (Institute of Space Studies of Catalonia); as well as the UAB-DEIC-Senda (Department of Information and Communications Engineering - Security of Networks and Distributed Applications) research group and the UAB-CEPHIS (Center of Prototypes and Solutions Hardware-Software) team. On the business side, Elektra draws on the know-how of CT Ingenieros, a Barcelona-based company dedicated to engineering innovation across different infrastructure sectors.

The project, built around an electric prototype, relies heavily on computer vision techniques for perception (stereo, stixels, obstacle detection, scene understanding), which tend to be computationally demanding, as well as on localization (GPS + IMU and vision) and navigation (control and planning). With this, the group has reached its first milestone: to "move autonomously from a starting point to a final point in a comfortable way, controlling that the trajectory always corresponds to free navigable space and without disturbing pedestrians".

Cameras, being passive sensors, have been the AI community's preferred sensor for driving. "Images give us a high amount of information to drive", explains Dr. López, "after all, this is how humans do it. We therefore needed to give the car the ability to interpret the information around it." That includes pedestrian (obstacle) detection, free navigable space detection, localization and route planning. "Of course, cameras aren't as precise as a human eye. Autonomous driving isn't possible with computer vision alone, but it is a great ally", clarifies Dr. López.

As Dr. Vázquez sees it: "In order to have a car that can drive, you need several things. Firstly, accurate pedestrian (obstacle) detection, in which CVC is definitely a pioneer. Secondly, free navigable space detection, which is no more than detecting the lane without obstacles or interferences. Thirdly, localization: the car needs to know where it is and where it is going. Fourthly, planning: the car has to plan its way from point A to point B in the smoothest way possible and thus define a global trajectory. And last but not least, control: executing the motion plan and performing the necessary manoeuvres."
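Read as a pipeline, those five ingredients form a repeating sense-localize-plan-act loop. The Python sketch below is purely illustrative: it assumes hypothetical perception, localizer, planner and controller components and is not taken from the Elektra software, but it shows how the pieces the researchers describe would fit together.

```python
# Illustrative sketch only: a simplified drive loop mirroring the five components
# Dr. Vazquez lists (obstacle detection, free-space detection, localization,
# planning, control). All objects and method names here are hypothetical; this
# is not code from the Elektra platform.
from dataclasses import dataclass


@dataclass
class Pose:
    x: float        # metres, in the map frame
    y: float        # metres, in the map frame
    heading: float  # radians


def drive_to(goal: Pose, perception, localizer, planner, controller) -> None:
    """Run one perceive-localize-plan-control cycle per camera frame."""
    while True:
        frame = perception.grab_frame()                   # stereo camera image
        obstacles = perception.detect_obstacles(frame)    # e.g. pedestrians
        free_space = perception.detect_free_space(frame)  # navigable, obstacle-free area
        pose = localizer.estimate(frame)                  # fuse GPS + IMU + vision

        if planner.reached(pose, goal):
            controller.stop()
            break

        # Re-plan a smooth local trajectory that stays inside the free space
        trajectory = planner.plan(pose, goal, free_space, obstacles)
        controller.follow(trajectory)                     # steering, throttle, brake
```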
But cameras aren't the only way to give the car perceptive abilities. Other technologies are based on sensors such as LiDAR and radar, whose raw data has a more direct interpretation and thus provides accurate distance estimation under different environmental conditions. The problems here are cost, these sensors being highly expensive, especially when compared to cameras, as well as the poor resolution of the data and the lack of detail when capturing the world's appearance. Visual data is comparatively far richer in complexity and detail; the challenge is not only to give cars the ability to see and interpret, but also to make decisions when faced with different circumstances.

Related article: The future of autonomous cars: Understanding the city with the use of videogames.

More information about the project at the Elektra website: http://adas.cvc.uab.es/elektra/
https://www.youtube.com/watch?v=ndy5mjz8lwY