Autonomous driving on San Francisco’s steepest streets

Researchers from the Computer Vision Center, Daimler and the Autonomous University of Barcelona have found a way to teach autonomous cars to drive safely on steep city streets: a novel stixel depth model that handles non-flat roads, together with an approximation that significantly reduces its computational cost.
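To ground the terminology before diving in, here is a minimal, hypothetical sketch of a stixel with a slanted (planar) depth model. The class name, fields and numbers are illustrative assumptions for this article, not the authors’ code:

```python
from dataclasses import dataclass

@dataclass
class SlantedStixel:
    """A thin vertical image segment with a planar depth model (illustrative)."""
    column: int    # image column the stixel occupies
    v_top: int     # top image row of the segment
    v_bottom: int  # bottom image row of the segment
    label: str     # semantic class, e.g. "road" or "car"
    b0: float      # depth at the top row
    b1: float      # depth slope per image row; 0.0 recovers the classic
                   # constant-depth stixel, non-zero follows a steep road

    def depth_at(self, v: int) -> float:
        # Depth varies linearly with the row: a slanted plane rather
        # than the flat, constant depth assumed by the original model.
        return self.b0 + self.b1 * (v - self.v_top)

# A steep road segment: depth keeps changing down the column instead of
# being forced to one constant value, so it is not mistaken for a wall.
road = SlantedStixel(column=320, v_top=400, v_bottom=470,
                     label="road", b0=15.0, b1=0.4)
print(road.depth_at(435))  # depth halfway down the segment -> 29.0
```

The key point is the slope term b1: it is what lets the representation describe a hill as a drivable surface instead of an obstacle.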

When faced with a steep street, an autonomous car may mistake the slope for a wall or a vertical obstacle and stop on the spot. Now, a team of computer vision scientists and computer engineers has used slanted stixels to teach cars that steep slopes are, in fact, drivable lanes, and has proposed a new algorithm that uses an over-segmentation of ‘stixel cuts’ to speed up the process, enabling real-time performance. The research is presented in the paper ‘Slanted Stixels: Representing San Francisco’s Steepest Streets’, which was given as an oral presentation at the British Machine Vision Conference (BMVC) in London on 5 September 2017 and won the conference’s Best Industry Paper award.

“Our work yields an improved Stixel representation that accounts for non-flat roads, outperforming the original Stixel model in this context while keeping the same accuracy on flat road scenes”, states Daniel Hernández, first author of the paper and researcher at the Computer Vision Center of Barcelona.

Stixels (rectangular superpixels) were originally devised to represent the 3D scene as observed by stereoscopic or monocular imagery. The proposed model builds on Semantic Stixels, which use semantic cues in addition to depth to extract a Stixel representation. Semantic Stixels, however, are limited to flat-road scenarios because of their constant-height assumption. Hernández and his team overcome this drawback by incorporating a novel plane model together with effective priors on the plane parameters.

“We propose a novel Stixel depth model (Slanted Stixels) that represents non-flat roads better than previous methods, but it is also slower. To keep it real-time, we also introduce an approximation that uses an over-segmentation of ‘Stixel cuts’ to speed up the algorithm while maintaining similar performance”, Hernández explains. With this new method, instead of trying out all possible stixel combinations, only a reduced number of candidate stixels is tested, which cuts the computational cost of building the representation and achieves real-time computation (a toy sketch of this pruning idea appears at the end of this article).

In addition, the researchers have introduced a new synthetic dataset inspired by SYNTHIA, generated specifically to evaluate the proposed model on non-flat roads. SYNTHIA-San Francisco (SYNTHIA-SF) consists of photorealistic frames rendered from a virtual city and comes with precise pixel-level depth and semantic annotations for 19 classes. The dataset contains 2224 images, which have been used to evaluate both depth and semantic accuracy, and will be released shortly after the conference.

Related video:
https://www.youtube.com/watch?v=5y3bU9WL984

Related articles:
Best Industry Paper Award At BMVC2017 For The ADAS Team At CVC
The Future Of Autonomous Cars: Understanding The City With The Use Of Videogames
Towards A No Driver Scenario: Autonomous And Connected Cars At The Computer Vision Center
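For readers curious about the speed-up, here is the toy sketch of the pruning idea referenced above: restrict the search for stixel boundaries to a small set of candidate “cuts” instead of every image row. The function names, cost function and threshold are invented for illustration and are not taken from the paper:

```python
import numpy as np

def candidate_cuts(disparity_col, threshold=1.0):
    """Over-segmentation step (illustrative): keep only rows where the
    disparity jumps abruptly, as candidate stixel boundaries ("cuts")."""
    jumps = np.abs(np.diff(disparity_col))
    cuts = [0] + [v + 1 for v in np.nonzero(jumps > threshold)[0]] + [len(disparity_col)]
    return sorted(set(cuts))

def best_segmentation(disparity_col, cuts, fit_cost):
    """Dynamic program over the reduced cut set: choose the cheapest way
    to split the column into segments, paying fit_cost(top, bottom) per
    segment. Complexity is O(len(cuts)^2) instead of O(rows^2)."""
    n = len(cuts)
    best = [0.0] + [float("inf")] * (n - 1)
    prev = [0] * n
    for j in range(1, n):
        for i in range(j):
            cost = best[i] + fit_cost(cuts[i], cuts[j])
            if cost < best[j]:
                best[j], prev[j] = cost, i
    segments, j = [], n - 1          # backtrack the chosen boundaries
    while j > 0:
        segments.append((cuts[prev[j]], cuts[j]))
        j = prev[j]
    return segments[::-1]

# A synthetic disparity column: a sloping road followed by an obstacle.
col = np.concatenate([np.linspace(20.0, 12.0, 40), np.full(30, 8.0)])
cuts = candidate_cuts(col)                     # e.g. [0, 40, 70]
segs = best_segmentation(col, cuts, lambda a, b: float(np.var(col[a:b])))
print(segs)                                    # e.g. [(0, 40), (40, 70)]
```

Because only the pre-selected cut rows are ever considered as boundaries, far fewer stixel hypotheses have to be scored, which is the essence of how the approximation keeps the richer slanted model running in real time.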