Welcome to the Learning and Machine Perception (LAMP) site.

The Learning and Machine Perception (LAMP) team at the Computer Vision Center conducts fundamental research and technology transfer in the field of machine learning for semantic understanding of visual data. The group works with a wide variety of visual data sources, from multispectral and medical imagery and consumer camera images to live webcam streams and video data. The recurring objective is the design of efficient and accurate algorithms for the automatic extraction of semantic information from visual media.

2 ICCV papers accepted

The papers ‘RankIQA: Learning from Rankings for No-reference Image Quality Assessment’ and ‘Domain-adaptive deep network compression’ have been accepted for ICCV. Papers and project pages will soon be available. We also have a paper, ‘Adversarial Networks for Spatial Context-Aware Spectral Image Reconstruction from RGB’, in the workshop on Physics Based Vision meets Deep Learning (PBDL).

Multimodal Translation Challenge

Our submission to the WMT17 Shared Task on Multimodal Translation, made within the M2CR project together with the Le Mans natural language processing group, obtained first rank. The system is described in the WMT17 paper ‘LIUM-CVC Submissions for WMT17 Multimodal Translation Task’.

New BMVC paper accepted

Our paper with Rada Deeb and Damien Muselet from the Université Jean Monnet, ‘3D color charts for camera spectral sensitivity estimation’ (pdf), has been accepted at BMVC this year.

New TIP accepted

Our paper ‘Improved Recursive Geodesic Distance Computation for Edge Preserving Filter’ has been accepted for publication in IEEE TIP. Code can be found here.

New Postdoc

We are happy to announce that Luis Herranz has started as a new postdoc on the M2CR project.

WACV2017 paper accepted

Our paper ‘Bandwidth limited object recognition in high resolution imagery’ (pdf) has been accepted at WACV 2017. The project page with the dataset is here.

NIPS workshop on Adversarial Training

Two papers will be presented at the NIPS workshop on Adversarial Training. “Ensembles of Generative Adversarial Networks” investigates the use of ensembles of GANs and shows that they can significantly improve results. “Invertible Conditional GANs for image editing” adds an encoder to a conditional GAN, thereby allowing semantic editing of photos. The code is available here.
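
For illustration, the snippet below is a minimal, untrained sketch of the editing idea described above: a conditional generator G(z, y) paired with an encoder that maps a photo back to a latent code z and an attribute vector y, so that editing amounts to changing y and decoding again. This is not the authors' code; PyTorch, the fully connected layers, and the 64x64 image / 18-attribute sizes are assumptions chosen for brevity.

    # Sketch of the "encoder + conditional GAN" editing idea (illustrative, not the paper's code).
    import torch
    import torch.nn as nn

    Z_DIM, Y_DIM, IMG = 100, 18, 64  # latent size, number of attributes, image side (assumed values)

    class Generator(nn.Module):
        """Conditional generator: concatenates z and y and maps them to an image."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(Z_DIM + Y_DIM, 256), nn.ReLU(),
                nn.Linear(256, 3 * IMG * IMG), nn.Tanh(),
            )
        def forward(self, z, y):
            return self.net(torch.cat([z, y], dim=1)).view(-1, 3, IMG, IMG)

    class Encoder(nn.Module):
        """Encoder trained to invert the generator: predicts both z and the attribute vector y."""
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * IMG * IMG, 256), nn.ReLU())
            self.to_z = nn.Linear(256, Z_DIM)
            self.to_y = nn.Linear(256, Y_DIM)
        def forward(self, x):
            h = self.backbone(x)
            return self.to_z(h), torch.sigmoid(self.to_y(h))

    G, E = Generator(), Encoder()

    # "Editing": encode a photo, change one attribute, decode again.
    with torch.no_grad():
        photo = torch.rand(1, 3, IMG, IMG)   # stand-in for a real photo
        z, y = E(photo)
        y_edit = y.clone()
        y_edit[0, 0] = 1.0                   # e.g. switch on the first attribute
        edited_photo = G(z, y_edit)
        print(edited_photo.shape)            # torch.Size([1, 3, 64, 64])

In the actual paper both networks are convolutional and trained on real face images; the sketch only shows how the encoder and the conditional generator fit together at editing time.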

Master Thesis Defended

Congratulations to all our master students who defended their theses on the 16th of September during the 3rd Annual Catalan Meeting on Computer Vision!

  • Olaia Artieda Aguirre
  • Sergi Canyameres Masip
  • Xialei Liu
  • Arcadi Llanza Carmona
  • Guim Perernau Guirao (Best Thesis Award)

We also received the Best Poster Award for our poster on ‘Hierarchical Part Detection with Neural Networks’.

WMT16 paper accepted

Our paper “Does Multimodality Help Human and Machine for Translation and Image Captioning?” has been accepted at the ACL 2016 First Conference on Machine Translation. Check some results here.

Business Track Winners

Two of our Ph.D. students, Laura López and Marc Masana, participated in the Accenture Digital Datathon. The competition consisted of generating a model for traffic accident prediction in the city of Barcelona. Our students’ team won the Business Track by unanimous decision of the jury.

From left to right: the four team members (Andrej, Marc, Laura and David) and Lluis Puerto, the RACC Technical Director.