Welcome to the Learning and Machine Perception (LAMP) site.

The Learning and Machine Perception (LAMP) team at the Computer Vision Center conducts fundamental research and technology transfer in the field of machine learning for semantic understanding of visual data. The group works with a wide variety of visual data sources, from multispectral and medical imagery and consumer camera images to live webcam streams and video data. The recurring objective is the design of efficient and accurate algorithms for the automatic extraction of semantic information from visual media.

See the current open positions here.

3 papers at ICCV 2019

Lichao’s paper on visual tracking, ‘Learning the Model Update for Siamese Trackers’, and Hamed’s paper on active learning, ‘Active Learning for Deep Detection Neural Networks’ (+ suppl. material) (github), have been accepted for ICCV. David Berga also has a paper with Xavier Otazu on saliency detection: ‘SID4VAM: A Benchmark Dataset With Synthetic Images for Visual Attention Modeling’. In addition, we have two workshop papers, on ‘Multi-Modal Fusion for End-to-End RGB-T Tracking’ and on ‘Temporal Coherence for Active Learning in Videos’.

Blog posts:

MeRGANs: generating images without forgetting (NIPS 2018 + video)
Mix and match networks (CVPR 2018)
Rotating networks to prevent catastrophic forgetting (ICPR 2018)
Deep network compression and adaptation (ICCV 2017)
Learning RGB-D features for images and videos (AAAI 2017, TIP 2018)

CVPR 2019

Our paper on ‘Learning Metrics from Teachers: Compact Networks for Image Embedding’ has been accepted for presentation at CVPR 2019.

2 NIPS 2018 accepted

Two NIPS papers have been accepted: one on ‘Image-to-image translation for cross-domain disentanglement’ (pdf) and one titled ‘Memory Replay GANs: Learning to Generate New Categories without Forgetting’ (pdf + blog + video).

1 ECCV and 1 BMVC accepted

Our research on transferring GANs has been accepted for ECCV 2018: ‘Transferring GANs: generating images from limited data’.

Our work on out-of-distribution detection has also been accepted for BMVC 2018; see the project page.

PR on fine-grained object recognition

Carola’s journal paper ‘Saliency for Fine-grained Object Recognition in Domains with Scarce Training Data’ (pdf) has been accepted for publication in Pattern Recognition.

PAMI on learning from rankings

Xialei’s paper ‘Exploiting Unlabeled Data in CNNs by Self-supervised Learning to Rank’ (pdf) has been accepted for publication in PAMI.

MediaEval Challenge 2018

Laura obtained a ‘distinctive mention’ in the Multimedia Satellite Task: Emergency Response for Flooding Events at MediaEval 2018.

TIP accepted on thermal infrared tracking

Lichao Zhang’s paper ‘Synthetic data generation for end-to-end thermal infrared tracking’ has been accepted for publication in IEEE TIP.

Thesis Aymen Azaza

Aymen Azaza has been awarded his PhD, jointly from the UAB and the University of Monastir (Tunisia), for his thesis ‘Context, Motion, and Semantic Information for Computational Saliency’.

Talk at BCN.AI

Joost gave a talk at BCN.AI, which can be viewed on YouTube.