We currently have a postdoc position open (see link).
Welcome to the Learning and Machine Perception (LAMP) site.
The Learning and Machine Perception (LAMP) team at the Computer Vision Center conducts fundamental research and technology transfer in the field of machine learning for semantic understanding of visual data. The group works with a wide variety of visual data sources: from multispectral and medical imagery and consumer camera images, to live webcam streams and video data. The recurring objective is the design of efficient and accurate algorithms for the automatic extraction of semantic information from visual media.
See our current open positions here.
Code framework for Class-Incremental Learning
Check out our new Framework for Analysis of Class-Incremental Learning (FACIL), which contains implementations of fourteen class-incremental algorithms and several baselines. It allows you to reproduce the results on CIFAR-100 presented in our survey paper.
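For context, class-incremental learning splits a dataset such as CIFAR-100 into a sequence of tasks with disjoint classes, and a model is trained on each task in turn without access to earlier task data. The sketch below illustrates that setting only; the function `make_task_splits` and its parameters are our own illustration, not FACIL's API (see the repository for the actual entry points).

```python
# Minimal sketch of the class-incremental setting FACIL targets:
# split 100 classes into disjoint tasks and train/evaluate sequentially.
# Illustrative only; not part of FACIL.
import random

NUM_CLASSES = 100
NUM_TASKS = 10  # e.g. 10 tasks of 10 classes each


def make_task_splits(num_classes=NUM_CLASSES, num_tasks=NUM_TASKS, seed=0):
    """Return a list of disjoint class subsets, one per task."""
    rng = random.Random(seed)
    classes = list(range(num_classes))
    rng.shuffle(classes)  # class ordering affects incremental-learning results
    per_task = num_classes // num_tasks
    return [classes[t * per_task:(t + 1) * per_task] for t in range(num_tasks)]


splits = make_task_splits()
for t, task_classes in enumerate(splits):
    # In a real run: filter the training set to `task_classes`,
    # train the model, then evaluate on all classes seen so far.
    seen = sorted(c for s in splits[:t + 1] for c in s)
    print(f"task {t}: train on {len(task_classes)} new classes, "
          f"eval on {len(seen)} seen classes")
```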
Two papers at ICCV2021
Two papers have been accepted for the main track: Yaxing’s paper, TransferI2I: Transfer Learning for Image-to-Image Translation from Small Datasets, and Shiqi’s paper, Generalized Source-free Domain Adaptation (see project page).
CVPR 2021
Fei’s paper Slimmable compressive autoencoders for practical neural image compression has been accepted for CVPR.
Four CVPR workshop papers have also been accepted.
Two papers at NeurIPS 2020
Riccardo’s paper RATT: Recurrent Attention to Transient Tasks for Continual Image Captioning, on continual learning of captioning systems, and Yaxing’s paper on transfer learning for image-to-image translation, DeepI2I: Enabling Deep Hierarchical Image-to-Image Translation by Transferring from GANs, have been accepted!
Four papers at CVPR 2020
Four CVPR papers have been accepted:
- MineGAN: effective knowledge transfer from GANs to target domains with few images,
- Orderless Recurrent Models for Multi-label Classification,
- Semantic Drift Compensation for Class-Incremental Learning,
- Semi-supervised Learning for Few-shot Image-to-Image Translation.
and one workshop paper.
Blog posts:
MeRGANs: generating images without forgetting (NIPS 2018 + video)
Mix and match networks (CVPR 2018)
Rotating networks to prevent catastrophic forgetting (ICPR 2018)
Deep network compression and adaptation (ICCV2017)
Learning RGB-D features for images and videos (AAAI 2017, TIP 2018)
Invited Talk at CLVISION2021
Three Continual Learning workshop papers
We have the following workshop papers at ICML:
- On Class Orderings for Incremental Learning,
- Disentanglement of Color and Shape Representations for Continual Learning.
And one at ECCV.
IJCV paper on multi-modal I2I
Yaxing’s paper ‘Mix and match networks: cross-modal alignment for zero-pair image-to-image translation’ has been accepted for publication in IJCV.