The fitting rooms of the future
Researchers from the Computer Vision Center (CVC) and the University of Barcelona (UB) have developed CLOTH3D, the first large-scale synthetic dataset of 3D clothes simulated on top of different body shapes, built to train deep learning models. This dataset is a first step toward enhanced virtual try-on experiences.
Every day, more people buy their clothes on virtual platforms, and the current pandemic situation is accelerating this trend. The advantages of this new way of shopping are evident, but it has some shortcomings too. One of the most important is that people cannot try on the clothes before receiving them at home. To address this problem, Artificial Intelligence and deep learning are playing a key role, since they allow the modelling, recovery and generation of 3D clothing models. These models will mean a breakthrough for enhanced virtual try-on experiences, reducing designers' and animators' workload.
Models for simulating clothes on top of body shapes already exist, but they focus almost exclusively on 2D. This is because 3D models need an enormous amount of data, and available 3D cloth data are very scarce. There are three main strategies for producing data of 3D-dressed humans: 3D scans, 3D inference from conventional images, and synthetic generation. 3D scans are costly and, at most, produce a single fused mesh (human + garments). Datasets that infer the 3D geometry of clothes from conventional images are inaccurate and cannot properly model cloth dynamics. Synthetic data, finally, is easy to generate and free of ground-truth error.
Researchers from the Human Pose Recovery and Behavior Analysis Group at the Computer Vision Center (CVC) - University of Barcelona (UB) chose this last path and developed CLOTH3D, the first large-scale synthetic dataset of 3D clothed human sequences, which was recently published in the Computer Vision – ECCV 2020 conference proceedings. “As a lot of data is needed to develop 3D models, we decided to generate our own. We have designed and released the biggest dataset of this kind, with strong outfit variability and rich cloth dynamics”, explained Hugo Bertiche (UB – CVC).
CLOTH3D contains large variability in garment type, topology, shape, size, tightness and fabric. Clothes are simulated on top of thousands of different pose sequences and body shapes, generating realistic cloth dynamics. CLOTH3D is unique in terms of garment, shape, and pose variability, and includes more than 2 million 3D samples. “We developed a generation pipeline that creates a unique outfit for each sequence in terms of garment type, topology, shape, size, tightness and fabric. While other datasets contain just a few different garments, ours is currently the biggest dataset in this field, with thousands of different garments. But we did not focus only on its development; we also released it in open access, so it is available to all audiences”, Dr. Sergio Escalera (CVC-UB) pointed out.
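To make the idea of a generation pipeline concrete, the sketch below shows how one might randomly sample a unique outfit description (garment type, fabric, size, tightness) for each motion sequence. This is a minimal illustration only: the attribute pools, parameter names and value ranges are hypothetical and do not come from the actual CLOTH3D generation code.

```python
import random

# Hypothetical attribute pools -- illustrative assumptions,
# not CLOTH3D's real garment catalogue.
GARMENT_TYPES = ["t-shirt", "dress", "trousers", "skirt", "jumpsuit"]
FABRICS = ["cotton", "silk", "denim", "leather"]

def sample_outfit(rng: random.Random) -> dict:
    """Sample one outfit description for a pose sequence."""
    return {
        "garment": rng.choice(GARMENT_TYPES),
        "fabric": rng.choice(FABRICS),
        "size": rng.uniform(0.8, 1.2),       # scale factor relative to the body
        "tightness": rng.uniform(0.0, 1.0),  # 0 = loose, 1 = skin-tight
    }

# One unique outfit per sequence, as described in the article.
rng = random.Random(0)
for sequence_id in range(3):
    print(sequence_id, sample_outfit(rng))
```

In a real pipeline each sampled description would then be turned into a garment mesh and handed to a cloth simulator running on top of an animated body; the sampling step is what gives every sequence its own outfit.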
But the clothing industry is not the only one that could take advantage of this dataset: “the entertainment industry could also benefit, since computer-generated imagery in movies and video games could become even more realistic”, argues Dr. Meysam Madadi (CVC). There is still plenty of work to do, however: “understanding 3D garments through deep learning is still in its early stages. On one hand, while our dataset covers most day-to-day garment variability, outfit styles are limited only by imagination. Faster, automatic and smart garment design could lead to many very interesting applications. On the other hand, cloth dynamics are extremely complex and challenging, and they have so far been tackled only in naive ways. Further exploration is a must for this community. Finally, real fabrics are much more than what simulators usually provide; deep learning has yet to find the proper way to model extremely fine and chaotic details such as wrinkles, as well as objects of arbitrary geometry related to outfits, such as hats, glasses, gloves, shoes, trinkets and more”, concluded H. Bertiche.