Two computational neuroscience PhD students at CVC, eLife ambassadors

Xim Cerdà-Company and David Berga are in the final year of their computational neuroscience PhDs and have recently become ambassadors of eLife, one of the most renowned open science journals worldwide. eLife runs an ambassador network in charge of promoting the values and vision of Open Science and the philosophy of the journal. … Read more

Computer Vision and Deep Learning at the Mobile World Congress 2018

The Computer Vision Center took part in this year’s edition of the Mobile World Congress, presenting a new crowd counting technique that uses Computer Vision and Deep Learning to estimate the number of people in gatherings. The CVC had a notable presence in the media and participated in the Mobile Talks organised by Mobile World Capital Barcelona.

More than 100,000 people attended this year’s Mobile World Congress (107,000 attendees, to be exact), slightly fewer than in past years but with higher-profile visitors. The Computer Vision Center attended with a stand within the Catalonia pavilion, as it has done for the past 5 years, and presented a demo based on the work of Xialei Liu and Dr. Joost Van de Weijer, to be presented at this year’s CVPR. The demo counted, in real time, the number of people gathered within the congress square and was a huge success with both the media and business visitors.

Dr. Joost Van de Weijer gave a press conference on the second day of the congress, presenting his novel technique to more than 20 media and newspaper journalists. The press conference was introduced by Mr. Jordi Puigneró, the Telecommunications, Cybersecurity and Digital Society Secretary of the Government of Catalonia and president of CVC, and presented by Dr. Josep Lladós, CVC director.

Dr. Fernando Vilariño participated in this year’s Mobile Talks, invited by Mobile World Capital Barcelona, in a discussion about the Digital Economy with innovation journalist Roc Fages. Mobile Talks are intended to give visitors an in-depth view of current issues. Dr. Vilariño explained the importance of digital transformation alongside social innovation, and the need for both to evolve together.

The CVC team came back from the Mobile World Congress with a good impression of the event and plenty of new contacts to follow up on. Mobile World Congress 2019 is already just around the corner.

Find CVC’s MWC presence in the Media here.

 

5 CVC papers presented at CVPR 2018

This year, CVC presented a total of 5 papers at CVPR, featuring 8 CVC researchers: Xialei Liu, Yaxing Wang, Aitor Alvarez-Gila, Dr. Abel González, Dr. Joost Van de Weijer, Dr. Luis Herranz, Dr. Sergio Escalera and Dr. Meysam Madadi. Congratulations!

The conference took place in Salt Lake City, Utah, from the 18th to the 22nd of June.

Papers can be found here:

Objects as context for detecting their semantic parts

On the Duality Between Retinex and Image Dehazing

Leveraging Unlabeled Data for Crowd Counting by Learning to Rank

Mix and match networks: encoder-decoder alignment for zero-pair image translation

Depth-Based 3D Hand Pose Estimation: From Current Achievements to Future Goals

 


 

Related article: Dena Bazazian, Organiser Of The Women In Computer Vision CVPR 2018 Workshop

 

International Day of Women and Girls in Science

Marie Curie, Hypatia of Alexandria and Rosalind Franklin are probably among the few names that sound familiar when talking about women in science. Unfortunately, not many more names stand out, and neither do their contributions to science and, of course, society. Throughout history, there have been many brilliant women in science whose work has … Read more

Digitus II: Releasing the content locked in manuscripts

If we laid all the historical documents stored in Catalan archives in a straight line, starting in Barcelona we would reach Paris. The distance between these two cities is about 840 km, which is exactly the length that would be covered by these materialised assets ranging from the 9th century to the present, stored, preserved … Read more

The patterns that link migraines to cities

Brick buildings following the same pattern, huge and modern office constructions framed with straight and monotonous lines, long escalators, windows placed in a regular pattern, blinds forming horizontal lines, sparkling lights and deafening sounds. For many, this description fits fully with the usual elements of a contemporary city. Nevertheless, for anyone who is suffering … Read more

The car in the matrix: CARLA

CARLA (Car Learning to Act) is an open-source simulator designed within academia as an autonomous driving research tool. Developed by the Computer Vision Center along with Intel Labs and the Toyota Research Institute, it is a platform to support the development, training and validation of autonomous urban driving systems. CARLA was presented at the first Conference on Robot Learning in Mountain View, CA by CVC/UAB PhD candidate Felipe Codevilla.

Training an autonomous car to drive is a challenge being tackled by researchers all over the world. Cars are already performing simple driving tasks on real roads. However, teaching these cars to drive with zero incidents, and in scenarios as varied as possible, isn’t trivial. There are plenty of rare and odd situations that a single car might never encounter, yet it needs to know how to react to them in real time.

“Imagine a child running towards the road, or a very dusty evening with the sun lying low and shining directly into the car’s cameras,” explains Felipe Codevilla, co-author of the paper ‘CARLA: An open urban driving simulator’. “You expect the car to be able to respond to these situations, but you need to have trained it first”. CARLA enables researchers to trigger the different, unexpected situations a car might come up against. As Dr. Antonio López, head of the ADAS team at CVC and also co-author of the paper, adds: “CARLA allows us to drive in different environments, lighting conditions, weather changes or urban scenarios”.

The physical world presents clear difficulties for autonomous driving research: beyond infrastructure costs and logistical difficulties, the funds and manpower involved are high. Furthermore, a single vehicle is far from sufficient for collecting the data needed to cover the multitude of corner cases that must be processed for both training and validation. CARLA has been developed to overcome these challenges and give researchers a new, open-source, research-oriented platform.

Although the use of simulators for autonomous driving is not new, and videogame technology has been used to train autonomous cars in the past, existing simulation platforms are limited, lacking basic elements such as pedestrians, traffic rules, intersections, or other complications that arise constantly in real-life driving.

Commercial videogames, such as Grand Theft Auto, have also been tested in autonomous driving research, but the privileged information the car needs to comprehend its environment remains unavailable in them due to their commercial nature. CARLA, built from scratch for autonomous driving research, gives the car access to privileged information such as GPS coordinates, speed, acceleration and detailed data on a number of infractions.
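As a rough sketch, the kind of privileged per-frame state described above could be modelled as a plain record. The field names below are illustrative assumptions for the sake of the example; they are not CARLA’s actual Python API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the privileged state a simulator can expose per frame.
# Field names are illustrative only; they do not mirror CARLA's real API.
@dataclass
class PrivilegedState:
    gps: tuple                 # (latitude, longitude) of the vehicle
    speed: float               # forward speed, in m/s
    acceleration: tuple        # (ax, ay, az), in m/s^2
    infractions: dict = field(default_factory=dict)  # per-type counters

state = PrivilegedState(
    gps=(41.5008, 2.1071),
    speed=8.3,
    acceleration=(0.0, 0.1, 0.0),
    infractions={"lane_invasion": 1, "collision_pedestrian": 0},
)
```

Exposing such a record to the agent is exactly the kind of ground-truth access a commercial game withholds, and it is what makes automated scoring of infractions possible.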

The virtual car’s sensors within the simulator comprise RGB cameras and pseudo-sensors that provide ground-truth depth and semantic segmentation. Camera parameters include 3D location, 3D orientation with respect to the car’s coordinate system, field of view and depth of field. The semantic segmentation sensor provides a total of 12 semantic classes: road, lane-marking, traffic sign, sidewalk, fence, pole, wall, building, vegetation, vehicle, pedestrian and other.
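The 12 classes listed above can be captured as a simple label map, as sketched below; the integer ids are made up for illustration and need not match CARLA’s own pixel encoding.

```python
# Label map for the 12 semantic classes provided by the segmentation
# pseudo-sensor; the integer ids here are illustrative, not CARLA's encoding.
SEMANTIC_CLASSES = [
    "road", "lane-marking", "traffic sign", "sidewalk", "fence", "pole",
    "wall", "building", "vegetation", "vehicle", "pedestrian", "other",
]
LABEL_TO_ID = {name: i for i, name in enumerate(SEMANTIC_CLASSES)}

# A segmentation output is then an image whose pixels hold these ids,
# giving the agent per-pixel ground truth about what it is looking at.
```

A perception module can look classes up by name, e.g. `LABEL_TO_ID["pedestrian"]`, to mask out the pixels it cares about.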

The simulator not only recreates a dynamic world but also provides a simple interface between this world and the agent that interacts with it. The platform has a highly realistic environment and enables users to use a set of sensors to guide the car. By not using metric maps, visual perception becomes a crucial asset for the vehicle.

The authors tested three approaches to autonomous driving in CARLA: first, a classic modular pipeline; second, an end-to-end model trained via imitation learning; and finally, an end-to-end model trained via reinforcement learning. The first approach, the classic modular pipeline, structured the driving task into three subsystems: perception, planning and continuous control.

In the second approach, the end-to-end imitation learning model, researchers used a dataset of driving traces recorded by human drivers, collecting a total of 14 hours of driving data for training. The third and last approach, the reinforcement learning model, trained a deep network based on a reward signal provided by the environment, with no human traces. The conclusion was that the performance of the first two systems (the modular pipeline and the imitation learning approach) was very close under most of the testing conditions, differing by less than 10%.
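The imitation-learning setup described above, fitting a controller to recorded human driving traces, can be illustrated with a deliberately tiny sketch: a one-parameter linear steering policy fit by gradient descent to toy (lane offset, steering) pairs. This is a stdlib-only illustration of the principle, not the deep conditional imitation network actually used with CARLA, and the data is invented for the example.

```python
# Toy behavioural cloning: fit steering = w * lane_offset + b to human traces.
# Purely illustrative; CARLA's agents use deep networks over camera images.

# Recorded "human" traces: (lane offset in metres, steering command).
traces = [(-0.4, 0.20), (-0.2, 0.10), (0.0, 0.00), (0.2, -0.10), (0.4, -0.20)]

w, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):                      # plain gradient descent on MSE
    gw = gb = 0.0
    for x, y in traces:
        err = (w * x + b) - y              # prediction error on one sample
        gw += 2 * err * x / len(traces)
        gb += 2 * err / len(traces)
    w -= lr * gw
    b -= lr * gb

# The fitted policy recovers the demonstrator's behaviour:
# steer against the lane offset (w close to -0.5, b close to 0).
steer = w * 0.3 + b                        # command for a 0.3 m offset
```

The reinforcement-learning variant would replace the recorded targets with a reward signal from the environment, which is precisely the setting the authors found much harder to make work out of the box.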

When the performance of the imitation learning and reinforcement learning models was compared, the agent trained with reinforcement learning performed significantly worse than the one trained by human imitation, despite having been trained on a significantly larger amount of data. The results thus suggest that an out-of-the-box reinforcement learning algorithm is not sufficient for the driving task, and further research is needed in this direction.

“Performance isn’t optimal in any of the tested methods,” states Felipe Codevilla when asked for a conclusion. Results showed that confronting cars with new environments and situations they hadn’t encountered in previous training still poses a serious challenge. Experts now expect that CARLA, being open source, will enable a broad community to actively engage in autonomous driving research.

More information at Carla.org.

Reference:

A. Dosovitskiy, G. Ros, F. Codevilla, A. López, V. Koltun (2017): CARLA: An Open Urban Driving Simulator


Related articles: 

CARLA in the Media

Towards A No Driver Scenario: Autonomous And Connected Cars At The Computer Vision Center

The Future Of Autonomous Cars: Understanding The City With The Use Of Videogames

Compitiendo Contra La Inteligencia Artificial Del Coche Autónomo

CVC at the Smart City Expo 2017

CVC was present at this year’s edition of the Smart City Expo in Barcelona within the Catalonia booth. We presented CARLA, our open-source simulator for autonomous driving research, and also brought our small autonomous vehicle prototype to show the work the CVC is doing in this area.

Have a look at our Smart City Expo moment on Twitter: https://twitter.com/i/moments/935551269154025472