Welcome to the Learning and Machine Perception (LAMP) site.

The Learning and Machine Perception (LAMP) team at the Computer Vision Center conducts fundamental research and technology transfer in the field of machine learning for semantic understanding of visual data. The group works with a wide variety of visual data sources: from multispectral and medical imagery and consumer camera images, to live webcam streams and video data. The recurring objective is the design of efficient and accurate algorithms for the automatic extraction of semantic information from visual media.

See here for current open positions.

BLOG posts:

MeRGANs: generating images without forgetting (NIPS 2018 + video)
Mix and match networks (CVPR 2018)
Rotating networks to prevent catastrophic forgetting (ICPR 2018)
Deep network compression and adaptation (ICCV 2017)
Learning RGB-D features for images and videos (AAAI 2017, TIP 2018)

2 NIPS 2018 accepted

Two NIPS 2018 papers have been accepted: one on ‘Image-to-image translation for cross-domain disentanglement’ (pdf) and one titled ‘Memory Replay GANs: Learning to Generate New Categories without Forgetting’ (pdf + blog + video).

1 ECCV and 1 BMVC accepted

Our research on transferring GANs has been accepted for ECCV 2018: ‘Transferring GANs: generating images from limited data’.

Our work on out-of-distribution detection has also been accepted for BMVC 2018; see the project page for details.

4 CVPR papers accepted!

Members of LAMP have four papers accepted at CVPR 2018. The paper ‘Mix and match networks: encoder-decoder alignment for zero-pair image translation’ has been accepted; for more information see the project page and this BLOG POST. Xialei has a paper on crowd counting: ‘Leveraging Unlabeled Data for Crowd Counting by Learning to Rank’; also see the project page.

Aitor Alvarez also got his paper ‘On the Duality Between Retinex and Image Dehazing’ accepted, in collaboration with Adrian Galdran from INESC TEC Porto, Portugal, and Javier Vazquez-Corral and Marcelo Bertalmío from Universitat Pompeu Fabra. In addition, Abel Gonzalez got his paper ‘Objects as context for detecting their semantic parts’ accepted; this research was performed in his previous group, CALVIN, with Vittorio Ferrari.

2 ICCV 2017 papers accepted

The papers ‘RankIQA: Learning from Rankings for No-reference Image Quality Assessment’ and ‘Domain-adaptive deep network compression’ have been accepted for ICCV 2017; see here for a BLOG post on this research. We also have a paper on ‘Adversarial Networks for Spatial Context-Aware Spectral Image Reconstruction from RGB’ in the workshop on Physics Based Vision meets Deep Learning (PBDL).

PAMI on learning from rankings

Xialei’s paper ‘Exploiting Unlabeled Data in CNNs by Self-supervised Learning to Rank’ (pdf) has been accepted for publication in PAMI.

MediaEval Challenge 2018

Laura has obtained a ‘distinctive mention’ in the Multimedia Satellite Task: Emergency Response for Flooding Events at MediaEval 2018.

TIP accepted on thermal infrared tracking

The paper ‘Synthetic data generation for end-to-end thermal infrared tracking’ by Lichao Zhang has been accepted for publication in IEEE TIP.

Thesis Aymen Azaza

Aymen Azaza has been awarded his PhD by both the UAB and the University of Monastir (Tunisia) for his thesis on ‘Context, Motion, and Semantic Information for Computational Saliency’.

Talk at BCN.AI

Joost gave a talk at BCN.AI, which can be viewed on YouTube.

TIP accepted on RGB-D

The journal paper ‘Learning Effective RGB-D Representations for Scene Recognition’ has been accepted for IEEE TIP. See here for a blog post on this paper.