CARLA (Car Learning to Act) is an open-source simulator for autonomous driving research. It was developed by the Computer Vision Center (CVC), together with Intel Labs and the Toyota Research Institute, as a platform to support the development, training and validation of autonomous urban driving systems. CARLA was presented at the first Conference on Robot Learning (CoRL) in Mountain View, CA by CVC/UAB PhD candidate Felipe Codevilla.
An autonomous car cannot simply be released into the wild and expected to learn on its own. Not only because of the obvious damage this would cause, but because there are plenty of rare and odd situations that a single car might never encounter, yet must know how to react to in real time.
“Imagine a child running towards the road, or a very dusty evening with the sun lying low and shining frontally into the car’s cameras,” explains Felipe Codevilla, co-author of the paper ‘CARLA: An Open Urban Driving Simulator’. “You expect the car to be able to respond to these situations, but you need to have trained it first.” CARLA enables researchers to trigger the different, unexpected situations a car might come up against. Dr. Antonio López, head of the ADAS team at CVC and also a co-author of the paper, adds: “CARLA allows us to drive in different environments, lighting conditions, weather changes or urban scenarios.”
The physical world poses clear difficulties for autonomous driving research: infrastructure costs and logistical hurdles, plus the considerable funds and manpower involved. Furthermore, as the paper notes, “a single vehicle is far from sufficient for collecting the requisite data that cover the multitude of corner cases that must be processed for both training and validation”.
The use of simulators for autonomous driving is not new. As Dr. Antonio López points out, videogame technology has been used to train autonomous cars, but existing simulation platforms are limited, lacking basic elements such as pedestrians, traffic rules, intersections, or other complications that arise constantly in real-life driving.
Commercial videogames such as Grand Theft Auto have also been used in autonomous driving research, but the privileged information the car needs to comprehend its environment is unavailable in such games due to their closed, commercial nature. CARLA, built specifically for autonomous driving research, gives the car access to privileged information: GPS coordinates, speed, acceleration and detailed data on a number of infractions.
The simulator not only recreates a dynamic world but also provides a simple interface between that world and an agent that interacts with it. The platform offers a highly realistic environment and lets users equip the car with a set of sensors to guide it. Because it does not rely on metric maps, visual perception becomes a critical task for the vehicle.
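The agent–environment interface described above amounts to a sense-act loop: at each simulation tick the world produces sensor readings (plus privileged state), and the agent returns a driving command. The following is a minimal, self-contained Python sketch of that loop; the class and field names here are illustrative assumptions, not CARLA's actual API.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """What the simulator hands the agent each tick: sensor data
    plus the privileged information CARLA exposes."""
    camera_image: list   # placeholder for raw pixel data
    speed_mps: float     # forward speed, metres per second
    gps: tuple           # (latitude, longitude)
    infractions: dict    # e.g. {"opposite_lane": 0.0, "sidewalk": 0.0}

@dataclass
class Control:
    """The command the agent sends back to the simulated vehicle."""
    steer: float     # in [-1, 1]
    throttle: float  # in [0, 1]
    brake: float     # in [0, 1]

class KeepLaneAgent:
    """Trivial agent: drive straight, easing off above a target speed."""
    def act(self, obs: Observation) -> Control:
        throttle = 0.5 if obs.speed_mps < 8.0 else 0.0
        return Control(steer=0.0, throttle=throttle, brake=0.0)

def step(agent, obs):
    # One tick of the sense-act loop: observation in, control out.
    return agent.act(obs)
```

A real CARLA client would receive the observation over the simulator's network interface instead of constructing it locally, but the shape of the loop is the same.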
The authors evaluated three approaches to autonomous driving in CARLA: first, a classic modular pipeline; second, an end-to-end model trained via imitation learning; and finally, an end-to-end model trained via reinforcement learning. The first approach decomposes the driving task into three subsystems: perception, planning and continuous control.
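The modular decomposition can be illustrated by chaining three stub stages; this is a hedged sketch of the general pattern, not the authors' implementation, and all function names and values here are invented for illustration.

```python
def perception(image):
    # Stub: a real perception module would run segmentation/detection
    # on camera frames. Here we pretend the image directly yields a
    # lateral offset from the lane centre.
    return {"lane_offset": image.get("offset", 0.0)}

def planning(scene):
    # Produce targets for the controller: steer back toward the lane
    # centre at a modest cruising speed.
    return {"target_steer": -scene["lane_offset"], "target_speed": 8.0}

def control(plan, current_speed):
    # Simple controller that tracks the planned targets.
    steer = max(-1.0, min(1.0, plan["target_steer"]))
    throttle = 0.5 if current_speed < plan["target_speed"] else 0.0
    return {"steer": steer, "throttle": throttle, "brake": 0.0}

def drive_pipeline(image, current_speed):
    # Perception -> planning -> continuous control, the three
    # subsystems of the modular approach.
    return control(planning(perception(image)), current_speed)
```

The appeal of this structure is that each stage can be developed and validated in isolation, in contrast to the end-to-end models, which map pixels to controls in a single learned network.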
In the second approach, imitation learning with an end-to-end model, the researchers used a dataset of driving traces recorded by human drivers, collecting a total of 14 hours of driving data for training. The last approach, the reinforcement learning model, trained a deep network from a reward signal provided by the environment, with no human traces. The authors found that the performance of two of the systems, the modular pipeline and the imitation learning approach, was very close under most testing conditions, differing by less than 10%.
When comparing imitation learning with reinforcement learning, the authors found that the agent trained with reinforcement learning performed significantly worse than the one trained with imitation learning. The reinforcement learning model was trained on a significantly larger amount of data; the results therefore suggest that an out-of-the-box reinforcement learning algorithm is not sufficient for the driving task, and that more research is needed along this line.
Overall, the researchers found that no method performs perfectly, not even on the simplest task of driving in a straight line. Furthermore, the results showed that placing cars in new environments, situations they have not been trained on before, poses a serious challenge. The researchers expect that CARLA, being open source, will foster autonomous driving research: as they state, they “hope that CARLA will enable a broad community to actively engage in autonomous driving research”.
Further information and accompanying assets can be found at Carla.org.