Mapping between Images and Conceptual Spaces: Sketch-based Image Retrieval

CVC has a new PhD on its record!

UAB PhD Special Award

Sounak Dey successfully defended his dissertation in Computer Science on November 23, 2020, and is now a Doctor of Philosophy by the Universitat Autònoma de Barcelona. On May 19, 2023, Dr. Dey was recognised with a UAB PhD Special Award for the significant contributions of his thesis to the field of image retrieval.

Download thesis

What is the thesis about?

This thesis presents several contributions to the literature on sketch-based image retrieval (SBIR). The first challenge in SBIR is how to map two different domains into a common space for effective retrieval of images, while tackling the different levels of abstraction people use to express their notion of surrounding objects when sketching. To this end, we first propose a cross-modal learning framework that maps both sketches and text into a joint embedding space that is invariant to depictive style while preserving semantics.

We then investigate the different query types possible, to address people's dilemma in sketching certain real-world objects. For this we propose an approach for multi-modal image retrieval over multi-labelled images: a multi-modal deep network architecture that jointly models sketches and text as input query modalities in a common embedding space, which is then further aligned with the image feature space. This permits encoding object-based features and aligning them with the query regardless of whether particular combinations of objects co-occur in the training set.

Finally, we explore zero-shot sketch-based image retrieval (ZS-SBIR), where human sketches are used as queries to retrieve photos from unseen categories. We advance prior art by proposing a novel ZS-SBIR scenario that represents a firm step forward in its practical application. The new setting uniquely recognises two important yet often neglected challenges of practical ZS-SBIR: (i) the large domain gap between amateur sketches and photos, and (ii) the necessity of moving towards large-scale retrieval. We also contribute to the community a novel ZS-SBIR dataset, QuickDraw-Extended, and conclude by charting future directions of research in this domain.
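To give a flavour of how retrieval in a joint embedding space works, here is a minimal, self-contained sketch. It assumes a trained sketch encoder and image encoder have already produced vectors in the same space (the toy vectors, names, and the `retrieve` helper below are illustrative, not the thesis's actual models); at query time, gallery photos are simply ranked by cosine similarity to the query embedding.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve(query_emb, gallery):
    # Rank gallery items by similarity to the query embedding,
    # most similar first.
    return sorted(gallery, key=lambda item: cosine(query_emb, item[1]),
                  reverse=True)

# Toy embeddings: in a real system these come from the learned
# sketch and image encoders mapping both domains into one space.
sketch_of_cat = [0.9, 0.1, 0.0]
gallery = [
    ("photo_dog", [0.1, 0.9, 0.1]),
    ("photo_cat", [0.8, 0.2, 0.1]),
    ("photo_car", [0.0, 0.1, 0.9]),
]

ranking = [name for name, _ in retrieve(sketch_of_cat, gallery)]
# The cat photo ranks first because its embedding lies closest
# to the sketch's embedding in the shared space.
```

The entire difficulty addressed by the thesis lies in learning the encoders so that semantically matching sketches, text, and photos land near each other in this space despite their very different depictive styles; once that holds, retrieval reduces to the nearest-neighbour search shown here.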

Keywords: Computer Vision, Pattern Recognition, Deep Learning, Sketch-based Image Retrieval, Zero-shot learning, Cross-modal retrieval, Multi-object Multi-modal retrieval, Hungarian Loss, QuickDraw Extended Dataset.