Semantic Representation: From Color to Deep Embeddings

November 4, 2019 at 9:00 am

Place: Northwestern Polytechnical University, Xi’an, China.

Committee:

  • Dr. Zhunga Liu (School of Automation, Northwestern Polytechnical University)
  • Dr. Dingwen Zhang (School of Mechanical and Electrical Engineering, Xidian University)
  • Dr. Yang Yang (Institute of Automation, Chinese Academy of Sciences)
  • Dr. Shuai Hao (School of Electrical and Control Engineering, Xi’an University of Science and Technology)
  • Dr. Lin Song (School of Information and Control Engineering, Xi’an University of Architecture and Technology)
  • Dr. Yongmei Cheng (School of Automation, Northwestern Polytechnical University)

Thesis Director:

  • Dr. Joost van de Weijer (Centre de Visió per Computador, Universitat Autònoma de Barcelona)
  • Dr. Yongmei Cheng (School of Automation, Northwestern Polytechnical University)

Abstract:

One of the fundamental problems of computer vision is to represent images with compact semantically relevant embeddings. These embeddings could then be used in a wide variety of applications, such as image retrieval, object detection, and video search. The main objective of this thesis is to study image embeddings from two aspects: color embeddings and deep embeddings.

In the first part of the thesis we start from hand-crafted color embeddings. We propose a method to order additional color names according to their complementary nature with respect to the basic eleven color names. This allows us to compute color name representations of arbitrary length with high discriminative power. Psychophysical experiments confirm that our proposed method outperforms baseline approaches. Secondly, we learn deep color embeddings from weakly labeled data by adding an attention strategy. The attention branch is able to correctly identify the relevant regions for each class. The advantage of our approach is that it can learn color names for specific domains for which no pixel-wise labels exist.
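The weakly supervised idea above can be sketched as a network with two heads: one predicting per-pixel color-name scores and one predicting an attention map, combined by attention-weighted pooling so that only image-level labels are needed for training. This is a minimal illustrative sketch in PyTorch; the layer sizes, class count, and head design are assumptions, not the thesis's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeaklySupervisedColorNames(nn.Module):
    """Sketch: learn color names from image-level labels only.
    All layer sizes here are illustrative placeholders."""
    def __init__(self, num_colors=11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.color_head = nn.Conv2d(32, num_colors, 1)  # per-pixel color scores
        self.attn_head = nn.Conv2d(32, 1, 1)            # per-pixel relevance

    def forward(self, x):
        f = self.features(x)
        color_logits = self.color_head(f)                            # (B, C, H, W)
        attn = torch.softmax(self.attn_head(f).flatten(2), dim=-1)   # (B, 1, H*W)
        # Attention-weighted pooling: image-level score per color name.
        pooled = (color_logits.flatten(2) * attn).sum(dim=-1)        # (B, C)
        return pooled, color_logits, attn

model = WeaklySupervisedColorNames()
logits, pixel_logits, attn = model(torch.randn(2, 3, 64, 64))
# Supervision comes only from image-level color labels (here: dummy labels).
loss = F.cross_entropy(logits, torch.tensor([3, 7]))
```

At test time the per-pixel head gives pixel-wise color-name predictions, even though no pixel-wise labels were ever used during training.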

In the second part of the thesis, we focus on deep embeddings. Firstly, we address the problem of compressing large embedding networks into small networks while maintaining similar performance. We propose to distill the metric from a teacher network to a student network. Two new losses are introduced to model the communication from a deep teacher network to a small student network: one based on an absolute teacher, where the student aims to produce the same embeddings as the teacher, and one based on a relative teacher, where the distances between
pairs of data points are communicated from the teacher to the student. In addition, various aspects of distillation have been investigated for embeddings, including hint and attention layers, semi-supervised learning, and cross-quality distillation. Finally, another aspect of deep metric learning, namely lifelong learning, is studied. We observe that some drift occurs during the training of new tasks for metric learning. We introduce a method to estimate this semantic drift based on the drift experienced by the data of the current task during its training. Given this estimate, embeddings of previous tasks can be compensated for the drift, thereby improving their performance. Furthermore, we show that embedding networks suffer significantly less from catastrophic forgetting than classification networks when learning new tasks.
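The two distillation losses and the drift compensation described above can be sketched in a few lines. This is an illustrative interpretation, not the thesis's exact formulation: the absolute teacher matches embeddings directly, the relative teacher matches pairwise distances, and the drift estimate transports old embeddings using a similarity-weighted average of the drift observed on current-task data (the Gaussian kernel and its `sigma` are assumptions).

```python
import torch
import torch.nn.functional as F

def absolute_teacher_loss(student_emb, teacher_emb):
    # Absolute teacher: the student reproduces the teacher's embeddings directly.
    return F.mse_loss(student_emb, teacher_emb)

def relative_teacher_loss(student_emb, teacher_emb):
    # Relative teacher: the student matches the teacher's pairwise distances.
    d_student = torch.cdist(student_emb, student_emb)
    d_teacher = torch.cdist(teacher_emb, teacher_emb)
    return F.mse_loss(d_student, d_teacher)

def estimate_drift(old_emb, current_before, current_after, sigma=1.0):
    """Shift old-task embeddings by the drift of nearby current-task points.
    current_before / current_after: current-task embeddings before and after
    training the new task. Kernel choice is an assumption of this sketch."""
    w = torch.exp(-torch.cdist(old_emb, current_before) ** 2 / (2 * sigma ** 2))
    w = w / w.sum(dim=1, keepdim=True)          # normalize weights per old point
    return old_emb + w @ (current_after - current_before)
```

Both losses are zero when student and teacher embeddings coincide, and the relative loss additionally allows the student to rotate or translate its embedding space freely as long as distances are preserved.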
