Lifelike Humans: Detailed Reconstruction of Expressive Human Faces

January 12, 2021 at 12:00 pm

Place: Streaming

Thesis Directors:

Dr. Felipe Lumbreras (Centre de Visió per Computador, Universitat Autònoma de Barcelona)

Dr. Antonio Agudo (Institut de Robòtica i Informàtica Industrial, CSIC-UPC)

Thesis Committee:

Dr. Xavier Binefa Valls (Departament de Tecnologies de la Informació i les Comunicacions, Universitat Pompeu Fabra)

Dr. Petia Ivanova Radeva (Centre de Visió per Computador – Departament de Matemàtica Aplicada i Anàlisi, Universitat de Barcelona)

Dr. Angel Sappa (Centre de Visió per Computador – Escuela Superior Politécnica del Litoral (ESPOL))


Developing human-like digital characters is a challenging task: humans are highly attuned to recognizing our fellows, and perceive computer-generated characters as inadequately humanized. To meet the standards of videogame and digital film productions, these characters must be modeled and animated as closely to real human beings as possible. This is, however, an arduous and expensive task, since many artists and specialists are required to work on a single character. We therefore study the automatic creation of detailed characters from inexpensive setups. In this work, we develop novel techniques to produce detailed characters by combining the aspects that stand out when developing realistic characters: skin detail, facial hair, expressions, and microexpressions. We examine each of these areas with the aim of automatically recovering each part without user interaction or training data.

We design our methods not only for robustness but also for simplicity of the setup, preferring a single image with uncontrolled illumination and algorithms that can run comfortably on a standard laptop. A detailed face with wrinkles and skin details is vital to a realistic character, so we first introduce a method to automatically describe facial wrinkles in the image and transfer them to the recovered base face. We then address facial hair recovery by solving a fitting problem with a novel parametrization model. Finally, we develop a mapping function that transfers expressions and microexpressions between different meshes, providing realistic animations for our detailed mesh.
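The expression-transfer idea above can be illustrated with a minimal sketch. This toy version, not the thesis' actual mapping function, simply adds per-vertex displacements (expressive minus neutral source mesh) onto corresponding target vertices; the function name and the correspondence array `corr` are assumptions for illustration:

```python
import numpy as np

def transfer_expression(src_neutral, src_expr, tgt_neutral, corr):
    """Toy per-vertex expression transfer between meshes.

    corr[i] is the index of the source vertex corresponding to target
    vertex i. (Hypothetical simplification of a mesh-to-mesh mapping.)
    """
    disp = src_expr - src_neutral      # (Ns, 3) source displacements
    return tgt_neutral + disp[corr]    # map displacements onto target

# toy example: 4-vertex source mesh, 2-vertex target mesh
src_neutral = np.zeros((4, 3))
src_expr = np.array([[0., 0., 1.],
                     [0., 0., 0.],
                     [0., 1., 0.],
                     [1., 0., 0.]])
tgt_neutral = np.array([[5., 5., 5.],
                        [6., 6., 6.]])
corr = np.array([0, 2])  # target vertex 0 ↔ source 0, target 1 ↔ source 2
print(transfer_expression(src_neutral, src_expr, tgt_neutral, corr))
```

A real system would of course use dense correspondences and deformation-aware transfer rather than raw displacements, so that skin detail and facial hair survive the mapping.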
We cover all the mentioned points with a focus on key aspects: (i) how to describe skin wrinkles in a simple and straightforward manner, (ii) how to recover 3D from 2D detections, (iii) how to recover and model facial hair from 2D to 3D, (iv) how to transfer expressions between models while preserving both skin detail and facial hair, and (v) how to perform all of the above without training data or user interaction. In this work, we present our proposals to solve these aspects with an efficient and simple setup. We validate our work on several datasets of both synthetic and real data, demonstrating remarkable results even in challenging cases such as occlusions from glasses, thick beards, and even different face topologies such as single-eyed cyclopes.
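The 3D-from-2D recovery in aspect (ii) can be sketched, under a simplifying assumption of a scaled-orthographic camera, as a least-squares fit of a projection matrix to landmark correspondences. This is a toy stand-in for a full face-fitting stage, and `fit_orthographic` is a hypothetical helper, not the thesis' solver:

```python
import numpy as np

def fit_orthographic(X3d, x2d):
    """Fit a 2x4 scaled-orthographic projection from landmarks.

    Solves, in least squares, for the P that best maps homogeneous
    3D landmarks onto their 2D detections. (Toy illustration only.)
    """
    n = X3d.shape[0]
    Xh = np.hstack([X3d, np.ones((n, 1))])        # (n, 4) homogeneous points
    P, *_ = np.linalg.lstsq(Xh, x2d, rcond=None)  # solve Xh @ P ≈ x2d
    return P.T                                     # (2, 4) projection matrix

# sanity check: recover a known projection from noiseless correspondences
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))
P_true = np.array([[2., 0., 0., 1.],
                   [0., 2., 0., -1.]])
x = np.hstack([X, np.ones((10, 1))]) @ P_true.T
P_est = fit_orthographic(X, x)
print(np.allclose(P_est, P_true))  # True for noiseless data
```

In practice the 3D points would themselves be unknowns of a statistical face model, so the fit alternates between camera and shape parameters rather than solving a single linear system.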