
Winner VOT-RGBT:

Lichao Zhang won the VOT-RGBT challenge this year. His work was published at the VOT 2019 workshop:
Multi-Modal Fusion for End-to-End RGB-T Tracking.

BLOG posts:

MeRGANs: generating images without forgetting (NIPS 2018 + video)
Mix and match networks (CVPR 2018)
Rotating networks to prevent catastrophic forgetting (ICPR 2018)
Deep network compression and adaptation (ICCV2017)
Learning RGB-D features for images and videos (AAAI 2017, TIP 2018)

CVPR 2019

Our paper on
Learning Metrics from Teachers: Compact Networks for Image Embedding has been accepted for presentation at CVPR 2019.

Schedule 2019

Tentative schedule:
July 4: Lu

June 13: Marc will present ‘Large Scale Incremental Learning’ (CVPR2019)

June 6: Mikel presents FearNet: Brain-Inspired Model for Incremental Learning

May 10: Lichao presents Learning Discriminative Model Prediction for Tracking
May 2: Kai presents his work, and Fei presents Efficient Variable Rate Image Compression with Multi-scale Decomposition Network (IEEE Trans. on CSVT, 2019)

Apr 9: Marc and Lu present their own work.

Apr 4: Kai will present Hardness-Aware Deep Metric Learning (CVPR2019 oral)

Oguz presents his work.

Mar 28: Lichao will present his work.

Carola will present:
Large Scale Fine-Grained Categorization and Domain-Specific Transfer Learning
(CVPR 2018)

Mar 7: Chenshen will present:
The lottery ticket hypothesis: finding sparse, trainable neural networks
(ICLR 2019 Oral)

Feb 28: Xialei will present:
On the Sensitivity of Adversarial Robustness to Input Data Distributions (ICLR 2019)

Oguz will present:
Semantic Regularisation for Recurrent Image Annotation
(CVPR2017)

Feb 14: Yaxing will present:
Unsupervised Learning of Object Landmarks through Conditional Image Generation
(NIPS2018)

Javad will present:
Large-Scale Visual Active Learning with Deep Probabilistic Ensembles
(arXiv 2018)

Thesis Aymen Azaza

Aymen Azaza has been awarded his PhD by both the UAB and the University of Monastir (Tunisia) for his thesis ‘Context, Motion, and Semantic Information for Computational Saliency’.

TIP accepted on RGB-D

Our paper ‘Learning Effective RGB-D Representations for Scene Recognition’ has been accepted to IEEE TIP. See here for a blog post on this paper.

SCHEDULE 2018

Dec 12 Lichao will present his recent work.

Kai will present Robust Classification with Convolutional Prototype Learning (CVPR2018)

Nov 28 Lu will present the paper: Learning to Compare: Relation Network for Few-Shot Learning.

Yaxing will present his recent research.

Nov 7: Carola will present Associating Inter-image Salient Instances for Weakly Supervised Semantic Segmentation (ECCV 2018)

Bogdan will present Progress & Compress: A scalable framework for continual learning (ICLR 2018)

Oct 31: Chenshen will present Large Scale GAN Training for High Fidelity Natural Image Synthesis

Oct 17: Oguz will present Deep Randomized Ensembles for Metric Learning (ECCV 2018)

Abel will present What do I Annotate Next? An Empirical Study of Active Learning for Action Localization

Oct 10: Marc presents Memory Aware Synapses: Learning what (not) to forget

Sept 19: Fei presents DF-Net: Unsupervised Joint Learning of Depth and Flow using Cross-Task Consistency (ECCV 2018)

Laura presents: Group Normalization (ECCV 2018)

Sept 5: Xialei presents Deep Matching Autoencoders (CVPR 2018)

Review on Computer Vision Techniques in Emergency Situations

Our review article on Computer Vision Techniques in Emergency Situations has been accepted by Multimedia Tools and Applications.

New Postdoc

We are happy to announce that Luis Herranz has started as a new postdoc on the M2CR project.

NIPS workshop on Adversarial Training

Two papers will be presented at the NIPS workshop on Adversarial Training. “Ensembles of Generative Adversarial Networks” investigates the use of ensembles for GANs and shows that they can significantly improve results. “Invertible Conditional GANs for image editing” adds an encoder to a conditional GAN, thereby allowing semantic editing of photos. The code is available here.
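The editing idea in the second paper can be illustrated with a toy sketch: encode a photo into a latent code z and a condition vector y, change y, and regenerate. The real networks are deep and learned; here they are replaced by tiny linear stand-ins (all names, dimensions, and the pseudo-inverse "encoders" are illustrative, not the paper's architecture).

```python
import numpy as np

rng = np.random.default_rng(0)
Z_DIM, Y_DIM, X_DIM = 8, 4, 16  # latent, condition, "image" sizes (illustrative)

# Stand-in for a trained conditional generator G(z, y) -> image.
W_g = rng.normal(size=(X_DIM, Z_DIM + Y_DIM))
def G(z, y):
    return W_g @ np.concatenate([z, y])

# Toy encoders inverting G, playing the role of the encoder an IcGAN adds:
# for this linear G, the pseudo-inverse recovers (z, y) exactly.
W_inv = np.linalg.pinv(W_g)
def E_z(x): return W_inv[:Z_DIM] @ x
def E_y(x): return W_inv[Z_DIM:] @ x

# Semantic editing: encode the photo, keep z, swap the condition vector y.
x = G(rng.normal(size=Z_DIM), np.array([1., 0., 0., 0.]))  # original "photo"
z, y = E_z(x), E_y(x)                      # invert photo to (z, y)
y_edit = np.array([0., 0., 1., 0.])        # change an attribute flag
x_edit = G(z, y_edit)                      # regenerate with edited condition
```

The design point the sketch captures is that only the condition vector changes between the original and the edited image; the latent code z, and hence the photo's identity, is preserved.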