The car in the matrix: CARLA
CARLA (Car Learning to Act) is an open-source simulator designed in academia as an autonomous driving research tool. Developed by the Computer Vision Center, along with Intel Labs and the Toyota Research Institute, it is a platform to support the development, training and validation of autonomous urban driving systems. CARLA was presented at the 1st Conference on Robot Learning (CoRL) in Mountain View, CA, by CVC/UAB PhD candidate Felipe Codevilla.
Training an autonomous car to drive is a challenge being tackled by researchers all over the world. Cars are already performing simple driving tasks on real roads. However, teaching these cars to drive with zero incidents and in as varied scenarios as possible isn’t trivial. There are plenty of rare and odd situations that a single car might never encounter, yet it needs to know how to react to them in real time.
“Imagine a child running towards the road, or a very dusty evening with the sun hanging low and shining directly into the car’s cameras,” explains Felipe Codevilla, co-author of the paper ‘CARLA: An open urban driving simulator’. “You expect the car to be able to respond to these situations, but you need to have trained it first.” CARLA enables researchers to trigger the different, unexpected situations a car might come up against. As Dr. Antonio López, head of the ADAS team at CVC and also a co-author of the paper, adds: “CARLA allows us to drive in different environments, lighting conditions, weather changes or urban scenarios.”
The physical world presents clear difficulties for autonomous driving research: infrastructure costs and logistical difficulties are high, and so are the funding and manpower involved. Furthermore, a single vehicle is far from sufficient for collecting the data needed to cover the multitude of corner cases that must be handled during both training and validation. CARLA has been developed to overcome these challenges and give researchers a new, open-source, research-oriented platform.
Although the use of simulators for autonomous driving is not new, and videogame technology has been used to train autonomous cars in the past, existing simulation platforms are limited, lacking numerous basic elements such as pedestrians, traffic rules, intersections, or other complications that arise constantly in real-life driving.
Commercial videogames, such as Grand Theft Auto, have also been tested in autonomous driving research, but the privileged information the car needs in order to comprehend its environment remains unavailable in them due to their commercial nature. CARLA, built from scratch for autonomous driving research, gives the car access to privileged information such as GPS coordinates, speed, acceleration and detailed data on a number of infractions.
The virtual car’s sensors within the simulator consist of RGB cameras and pseudo-sensors that provide ground-truth depth and semantic segmentation. Camera parameters include the 3D location, the 3D orientation with respect to the car’s coordinate system, the field of view and the depth of field. The semantic segmentation sensor provides a total of 12 semantic classes: road, lane-marking, traffic sign, sidewalk, fence, pole, wall, building, vegetation, vehicle, pedestrian and other.
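To make the segmentation pseudo-sensor concrete, here is a minimal sketch of how a per-pixel class-ID map covering those 12 classes could be turned into an RGB visualisation. The class ordering and colour palette below are illustrative assumptions, not CARLA’s actual encoding:

```python
import numpy as np

# The 12 semantic classes reported by the segmentation pseudo-sensor.
# The id ordering here is our own assumption for illustration.
CLASSES = ["other", "road", "lane-marking", "traffic sign", "sidewalk",
           "fence", "pole", "wall", "building", "vegetation",
           "vehicle", "pedestrian"]

# Illustrative palette: one RGB colour per class id (not CARLA's real colours).
PALETTE = np.array([
    [0, 0, 0], [128, 64, 128], [157, 234, 50], [220, 220, 0],
    [244, 35, 232], [190, 153, 153], [153, 153, 153], [102, 102, 156],
    [70, 70, 70], [107, 142, 35], [0, 0, 142], [220, 20, 60],
], dtype=np.uint8)

def colourise(label_map: np.ndarray) -> np.ndarray:
    """Turn an HxW array of class ids into an HxWx3 RGB visualisation."""
    return PALETTE[label_map]

# Example: a tiny 2x2 label map (road, road, pedestrian, vehicle).
labels = np.array([[1, 1], [11, 10]])
rgb = colourise(labels)
```

The fancy-indexing lookup `PALETTE[label_map]` colourises the whole image in one vectorised step, which is the usual idiom for rendering ground-truth segmentation maps.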
The simulator not only recreates a dynamic world but also provides a simple interface between this world and the agent that interacts with it. The platform has a highly realistic environment and lets users deploy a set of sensors to guide the car. Because the simulator does not rely on metric maps, visual perception becomes a crucial asset for the vehicle.
The authors evaluated three approaches to autonomous driving in CARLA: first, a classic modular pipeline; second, an end-to-end model trained via imitation learning; and finally, an end-to-end model trained via reinforcement learning. The first approach, the classic modular pipeline, structured the driving task into three subsystems: perception, planning and continuous control.
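The three-stage split can be sketched as a chain of functions, perception feeding planning feeding control. This is a deliberately minimal illustration of the architecture described above; the class names, thresholds and the proportional controller are our own assumptions, not the paper’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    image: list          # camera frame (placeholder)
    speed: float         # current speed in m/s

@dataclass
class Plan:
    target_speed: float
    target_heading: float

def perceive(obs: Observation) -> dict:
    """Perception: estimate lane position and hazards from sensor data."""
    # A real system would run detectors/segmentation on obs.image.
    return {"lane_offset": 0.0, "hazard_ahead": False}

def plan(state: dict) -> Plan:
    """Planning: choose a local target given the perceived scene."""
    target_speed = 0.0 if state["hazard_ahead"] else 8.0
    return Plan(target_speed=target_speed, target_heading=-state["lane_offset"])

def control(obs: Observation, p: Plan) -> dict:
    """Continuous control: a simple proportional controller."""
    throttle = max(0.0, min(1.0, 0.1 * (p.target_speed - obs.speed)))
    return {"throttle": throttle, "steer": p.target_heading}

obs = Observation(image=[], speed=5.0)
command = control(obs, plan(perceive(obs)))
```

The point of the modular design is that each stage can be developed and validated independently, at the cost of hand-designed interfaces between them.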
In the second approach, the end-to-end model trained via imitation learning, the researchers used a dataset of driving traces recorded by human drivers, collecting a total of 14 hours of driving data for training. The third and last approach, the reinforcement learning model, trained a deep network from a reward signal provided by the environment, with no human traces. The authors found that the performance of two of the systems (the modular pipeline and the imitation learning approach) was very close under most testing conditions, differing by less than 10%.
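The imitation-learning idea, learning a policy by regressing recorded expert actions onto the corresponding observations, can be sketched as plain behavioural cloning. This toy version uses a linear policy and synthetic data purely for illustration; the paper’s actual model is a deep conditional network trained on camera images:

```python
import numpy as np

# Behavioural-cloning sketch: fit a policy a = W @ o to (observation, action)
# pairs recorded from an "expert" driver. Everything here is synthetic.
rng = np.random.default_rng(0)
true_W = np.array([[0.5, -1.0, 0.2]])   # hidden expert policy (assumption)
obs = rng.normal(size=(1000, 3))        # recorded observations
actions = obs @ true_W.T                # expert actions (e.g. steering)

W = np.zeros((1, 3))                    # learned policy weights
lr = 0.1
for _ in range(200):
    pred = obs @ W.T                    # policy's predicted actions
    grad = 2 * (pred - actions).T @ obs / len(obs)  # gradient of MSE loss
    W -= lr * grad                      # gradient-descent step

# After training, W should closely approximate the expert policy.
```

Minimising the mean-squared error between predicted and recorded actions is the core of the approach; the hard part in practice is generalising beyond the states the human drivers visited.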
When the imitation learning and reinforcement learning models were compared, the agent trained with reinforcement learning performed significantly worse than the one trained by imitating humans. Since the reinforcement learning model had been trained with a significantly larger amount of data, the results suggest that an out-of-the-box reinforcement learning algorithm is not sufficient for the driving task and that further research is needed along this line.
“Performance isn’t optimal in any of the tested methods,” states Felipe Codevilla when asked for a conclusion. The results showed that new environments and situations the cars had not encountered during training still pose a serious challenge. Experts now expect that CARLA, being open source, will enable a broad community to engage actively in autonomous driving research.
More information at Carla.org.
A. Dosovitskiy, G. Ros, F. Codevilla, A. López and V. Koltun (2017): ‘CARLA: An Open Urban Driving Simulator’, Proceedings of the 1st Conference on Robot Learning (CoRL).
Boosting entrepreneur projects within Smart mobility
To highlight key technologies, respond to the challenges of smart mobility and promote entrepreneurship, the UAB Research Park (PRUAB), the Computer Vision Center and the UAB School of Engineering have organized a course dedicated to smart mobility and entrepreneurship: the “Intelligent Vehicle and Business Opportunities” program. The course has the direct support of the Department of Enterprise and Occupation of the Generalitat of Catalonia (Departament d’Empresa i Ocupació de la Generalitat de Catalunya), through the Catalunya Emprèn Program.
The course’s Demoday took place on the 27th of June, and the program involved 38 participants with varied profiles: researchers, entrepreneurs, engineering students, designers and other professionals. Over six months, the students received training on emerging technologies in smart transport, business management and new business models. In parallel, they worked on their own projects, designing new solutions alongside technology and business experts who supported them in bringing their ideas closer to the market.
Innovative projects for sustainable mobility
The six projects developed throughout the program were presented on Tuesday, the 27th of June, at the Eureka building (UAB Research Park) in a Demoday and evaluated by a specialized jury: Felipe Jiménez, Director of the Intelligent System Unit at the Institute of Automobile Research of the Polytechnic University of Madrid; José Manuel Barrios, head of the Innovation department at Applus+ IDIADA; Eduardo Urruticoechea, Director of Innovation at IDNEO; Antoni Espinosa, associate professor at the UAB and researcher in the field of autonomous vehicles; and Meritxell Bassolas, head of Technological Transfer at the Computer Vision Center.
The winning project of the event was CheckUp, a mechanism for the automatic diagnosis of damage in rental cars. Currently, car rental companies still rely on the visual inspection of their staff, registering damage manually on paper. CheckUp proposes to automate and accelerate this process through computer vision, preserving the customer’s confidence and the impartiality of the company.
This project has obtained a prize of 1,000 euros, sponsored by the accelerator mVenturesBCN.
The other developed projects are:
Apparka: a web and mobile platform for the management of shared electric bicycles, which allows users to find, rent and unlock a bicycle through a mobile application. The application also analyses and manages data recorded on each route in real time, such as distance, time, calories burned or the state of the bicycle’s battery, and proposes the optimal route to users. The project was created to offer a new concept of mobility, serving as a strong complement to public transport and a replacement for private transport.
Accsint: a driving-assistance system for electric wheelchairs used by people with reduced mobility. The mechanism incorporates sensors and satellite connectivity to detect obstacles, slopes, holes, etc., and to notify emergency services in case of an accident.
Urban Charge aims to make private electric-vehicle charging points profitable during the time frames in which they sit empty. It is a collaborative platform that connects users seeking parking and charging for their electric vehicle with available private charging points.
Drivvisor focuses on the driver, proposing a monitoring system that evaluates the driver’s state while driving. The idea is to develop a mobile application based on computer vision that detects driver fatigue and distraction, issuing alerts in real time.
Finally, Smart Clean Technologies proposes autonomous vehicles capable of cleaning large surfaces. Its innovative characteristic is that the robot knows its location at every moment and can thus reach all points. Moreover, the system is flexible and non-intrusive.
Although the course has reached its end, the projects will continue to receive support from the UAB Research Park in order to effectively reach the market. Most of the projects have already developed a first prototype and have attracted the interest of different institutions.
You’ll find all the Demoday pictures here: https://www.flickr.com/photos/pruab/sets/72157685506538076/with/35585348325/