Visual recognition in the wild: learning from rankings in small domains and continual learning in new domains

December 16, 2019 at 11:00 am

Place: CVC Sala d’actes

Committee:

  • Dr. Tinne Tuytelaars (Department of Electrical Engineering, Katholieke Universiteit Leuven)
  • Dr. Marta R. Costa-jussà (Signal Theory and Communications Department, Universitat Politècnica de Catalunya)
  • Dr. Dimosthenis Karatzas (Centre de Visió per Computador, Universitat Autònoma de Barcelona)
  • Dr. David Berga Garreta (Centre de Visió per Computador, Universitat Autònoma de Barcelona)
  • Dr. Petia Radeva (Department of Mathematics and Computer Science, Universitat de Barcelona)

Thesis Directors:

  • Dr. Joost van de Weijer (Centre de Visió per Computador, Universitat Autònoma de Barcelona)
  • Dr. Andrew D. Bagdanov (Media Integration and Communication Center, University of Florence)

Abstract:

Deep convolutional neural networks (CNNs) have achieved superior performance in many visual recognition applications, such as image classification, detection and segmentation. In this thesis we address two limitations of CNNs. First, training deep CNNs requires huge amounts of labeled data, which are expensive and labor-intensive to collect. Second, training CNNs in a continual learning setting is still an open research question: catastrophic forgetting is very likely when adapting trained models to new environments or new tasks. Therefore, in this thesis we aim to improve CNNs for applications with limited data and to adapt CNNs continually to new tasks.

Self-supervised learning leverages unlabeled data by introducing an auxiliary task for which data is abundantly available. In the first part of the thesis, we show how rankings can be used as a self-supervised proxy task for regression problems. We then propose an efficient backpropagation technique for Siamese networks which avoids the redundant computation introduced by the multi-branch network architecture. In addition, we show that network uncertainty on the self-supervised proxy task is a good measure of the informativeness of unlabeled data, and can be used to drive an active learning algorithm. We apply our framework to two regression problems: Image Quality Assessment (IQA) and crowd counting. For both, we show how to automatically generate ranked image sets from unlabeled data. Our results show that networks trained to regress to the ground-truth targets for labeled data, while simultaneously learning to rank unlabeled data, obtain significantly better, state-of-the-art results. We further show that active learning using rankings can reduce labeling effort by up to 50% for both IQA and crowd counting.
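To make the learning-to-rank idea concrete, the sketch below shows a pairwise margin ranking loss of the kind a Siamese ranking network could minimize on automatically ranked image pairs. This is a minimal illustrative version, not the thesis's implementation: the function names are hypothetical, and the thesis additionally uses an efficient backpropagation scheme rather than a naive multi-branch pass.

```python
def margin_ranking_loss(score_hi, score_lo, margin=1.0):
    """Hinge-style ranking loss on one pair of predicted scores.

    `score_hi` is the network's score for the image that should rank
    higher (e.g. the less-distorted image in IQA, or the larger crop in
    crowd counting); the loss is zero once it outscores `score_lo` by
    at least `margin`.
    """
    return max(0.0, margin - (score_hi - score_lo))


def batch_ranking_loss(pairs, margin=1.0):
    """Average the pairwise loss over a batch of (score_hi, score_lo)
    tuples obtained from automatically ranked image pairs."""
    return sum(margin_ranking_loss(a, b, margin) for a, b in pairs) / len(pairs)
```

In a semi-supervised setup, this ranking term would be added to the usual regression loss on the labeled subset, so unlabeled data shapes the same network that regresses to ground-truth targets.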

In the second part of the thesis, we propose two approaches to avoiding catastrophic forgetting in sequential task learning scenarios. The first approach builds on Elastic Weight Consolidation, which uses a diagonal Fisher Information Matrix (FIM) to measure the importance of the parameters of the network. However, the diagonal assumption is unrealistic, so we approximately diagonalize the FIM using a set of factorized rotation parameters, which leads to significantly better performance on continual learning of sequential tasks. For the second approach, we show that forgetting manifests differently at different layers of the network, and we propose a hybrid approach in which distillation is used in the feature extractor and replay in the classifier, via feature generation. Our method addresses the limitations of generative image replay and probability distillation (i.e. Learning without Forgetting) and can naturally aggregate new tasks into a single, well-calibrated classifier. Experiments confirm that our proposed approach outperforms the baselines and several state-of-the-art methods.
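For readers unfamiliar with Elastic Weight Consolidation, the sketch below shows the standard diagonal-FIM penalty that the first approach starts from: a quadratic term that anchors each parameter to its value after the previous task, weighted by its estimated importance. This is a generic illustration with hypothetical names, not the thesis's rotated-FIM method; the thesis's contribution is to rotate the parameter space so that this diagonal approximation becomes more accurate.

```python
def ewc_penalty(params, old_params, fisher_diag, lam=1.0):
    """EWC-style regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    `old_params` are the parameter values after training the previous
    task, and `fisher_diag` holds the diagonal of the Fisher Information
    Matrix, so important parameters (large F_i) are penalized more
    strongly for drifting.
    """
    return 0.5 * lam * sum(
        f * (p - p_old) ** 2
        for p, p_old, f in zip(params, old_params, fisher_diag)
    )
```

During training on a new task, this penalty is simply added to the new task's loss; parameters that mattered little for the old task (small Fisher values) remain free to change.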


Watch the video presentation