Welcome to the Learning and Machine Perception (LAMP) site.

The Learning and Machine Perception (LAMP) team at the Computer Vision Center conducts fundamental research and technology transfer in the field of machine learning for semantic understanding of visual data. The group works with a wide variety of visual data sources: from multispectral and medical imagery and consumer camera images, to live webcam streams and video data. The recurring objective is the design of efficient and accurate algorithms for the automatic extraction of semantic information from visual media.

See our current open positions here.

BLOG posts:

Mix and match networks (CVPR 2018)
Rotating networks to prevent catastrophic forgetting (ICPR 2018)
Deep network compression and adaptation (ICCV2017)

2 NIPS 2018 papers accepted

Two NIPS 2018 papers have been accepted: one on ‘Image-to-image translation for cross-domain disentanglement’ (pdf) and one titled ‘Memory Replay GANs: Learning to Generate New Categories without Forgetting’ (pdf).

1 ECCV and 1 BMVC accepted

Our research on transferring GANs has been accepted for ECCV 2018: ‘Transferring GANs: generating images from limited data’.

Our work on out-of-distribution detection has also been accepted for BMVC 2018; see the project page.

4 CVPR papers accepted!

Members of LAMP have four papers accepted at CVPR 2018. The paper ‘Mix and match networks: encoder-decoder alignment for zero-pair image translation’ has been accepted; for more information see the project page. Xialei has a paper on crowd counting: ‘Leveraging Unlabeled Data for Crowd Counting by Learning to Rank’ (see the project page).

Aitor Alvarez also got his paper ‘On the Duality Between Retinex and Image Dehazing’ accepted, in collaboration with Adrian Galdran from INESC TEC Porto, Portugal, and Javier Vazquez-Corral and Marcelo Bertalmío from Universitat Pompeu Fabra. Abel Gonzalez also got his paper ‘Objects as context for detecting their semantic parts’ accepted; the research was performed in his previous group, CALVIN, with Vittorio Ferrari.

2 ICCV 2017 papers accepted

The papers ‘RankIQA: Learning from Rankings for No-reference Image Quality Assessment’ and ‘Domain-adaptive deep network compression’ have been accepted for ICCV. Papers and project pages will be available soon. We also have a paper on ‘Adversarial Networks for Spatial Context-Aware Spectral Image Reconstruction from RGB’ in the workshop on Physics Based Vision meets Deep Learning (PBDL).

Talk at BCN.AI

Joost gave a talk at BCN.AI, which can be viewed on YouTube.

TIP accepted on RGB-D

The journal paper ‘Learning Effective RGB-D Representations for Scene Recognition’ has been accepted for IEEE TIP.

CVIU accepted on saliency detection

Aymen’s paper on Context Proposals for Saliency Detection has been accepted for CVIU. This work shows how to extend object proposals with context proposals, which allow for a precise description of the object’s context. These are shown to significantly improve the performance of saliency detection.

2 ICPR papers + 1 ICIP accepted

The papers ‘Rotate your Networks: Better Weight Consolidation and Less Catastrophic Forgetting’ (project page) and ‘Weakly Supervised Domain-Specific Color Naming Based on Attention’ (project page) have been accepted for publication at ICPR 2018 in Beijing. The paper ‘Learning Illuminant Estimation from Object Recognition’ has been accepted for ICIP. PDFs will follow soon.

Convolutional Neural Networks for Texture Recognition and Remote Sensing

Our work on ‘Binary Patterns Encoded Convolutional Neural Networks for Texture Recognition and Remote Sensing Scene Classification’ has been accepted for publication in the ISPRS Journal of Photogrammetry and Remote Sensing.

Mobile World Congress

The presentation at the Mobile World Congress on the CVPR paper on crowd counting received considerable press coverage.