CVCTech #1: Automatic segmentation for virtual dressing rooms
Creating the dressing rooms of the future
Online fashion has experienced sustained growth in recent years, and this trend consolidated further during 2020 due to the restrictions caused by the COVID-19 pandemic. According to Statista data, worldwide revenue from clothing, footwear and accessories purchased online reached around 758.3 billion dollars by the end of 2020. Moreover, this growth is expected to continue unabated and generate revenues of around USD 1.2 trillion by the end of 2025.
However, online fashion is also one of the sectors with the highest return rates: around 30% of the clothes bought online are sent back. This represents a high cost for companies, as it doubles the logistics expenses, and it also has a huge impact on the environment.
The CVC prototype for virtual dressing rooms tackles the doubts many of us have while shopping online: “Will this garment fit me? How will this piece of clothing look on me?”. As the main reasons for returning clothes are incorrect sizing (38% of returns) and dissatisfaction with the fit (15%), a technology that gives you a first impression of how a piece of clothing might fit your body would help to lower the dissatisfaction rate of online purchases and thus reduce costs and environmental impact.
To make this possible, the CVC Innovation Unit has taken advantage of body-part recognition technology to develop a first-stage prototype of a virtual dressing room. The prototype automatically segments the parts of the human body in order to place the different virtual garments in the proper position. The demonstration features different pieces of virtual clothing in a variety of prints, patterns and formats. The purpose of the application is to make trying on clothes while shopping easier, which benefits both the vendor and the customer by saving time and reducing costs.
The prototype was presented at Mobile World Congress 2022 (28 February - 3 March 2022) and at IoT Solutions World Congress 2022 (10-12 June 2022). During both events, hundreds of people came to the CVC booth to try out the demonstration and check how our virtual clothing collection fitted them. The virtual dressing room attracted the attention of professionals from different sectors, especially entrepreneurs from the fashion and retail industries.
The technology behind the virtual dressing rooms and other applications
The virtual dressing room prototype uses deep learning to automatically segment the different parts of the human body. The model separates the body from the background, detects its boundaries and classifies each region according to the body part it belongs to (right arm, left arm, right leg, left leg, torso, etc.). It does so with images taken from a 2D camera, since it is intended to run as a mobile application using the standard camera found on most current phones.
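As a rough sketch of what such a model outputs: a segmentation network produces a per-pixel score for each class, and taking the highest-scoring class per pixel yields a label map from which per-part masks can be extracted. The label set below is hypothetical (the real model has its own classes), and random scores stand in for a trained network.

```python
import numpy as np

# Hypothetical body-part label set; a real model defines its own classes.
PARTS = ["background", "torso", "right_arm", "left_arm", "right_leg", "left_leg"]

def segment_body_parts(scores: np.ndarray) -> np.ndarray:
    """Turn per-pixel class scores of shape (H, W, num_parts), as produced
    by a segmentation network, into a label map of shape (H, W)."""
    return scores.argmax(axis=-1)

def part_mask(labels: np.ndarray, part: str) -> np.ndarray:
    """Boolean mask of the pixels assigned to one body part, e.g. the
    region where a virtual garment would be overlaid."""
    return labels == PARTS.index(part)

# Toy 4x4 "image" with random scores standing in for network output.
rng = np.random.default_rng(0)
scores = rng.random((4, 4, len(PARTS)))
labels = segment_body_parts(scores)
torso = part_mask(labels, "torso")
```

The same label map drives the dressing-room idea: each garment is anchored to the mask of the body part it belongs on.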
Apart from its applications in the fashion industry, body segmentation techniques also have clear applications in several other industries, such as:
- Collaborative robots in industry
Manufacturing is one of the fields on which AI is having the greatest impact. Many factories have already implemented AI-based solutions to automate processes and improve productivity. In some of these factories, robots and humans coexist, sharing the workspace and cooperating on several tasks. Since these robots share space with humans, it is important to guarantee the safety of workers when they work close together or interact, especially when the robots are manoeuvring dangerous workpieces or operating at high speeds.
To this end, the CVC is collaborating on two projects: Looming Factory and TRREX. In Looming Factory, the CVC has worked together with the Leitat Technological Center to develop a monitored factory environment that avoids risky situations between robots and humans. Using cameras, the scene is monitored to track both the robot and the human and automatically stop the robot if a dangerous situation occurs, such as a worker coming too close to the machine.
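The core of such a safety layer can be sketched as a proximity check between tracked human keypoints and the robot's position. The 1.5 m threshold and the 2D coordinates below are illustrative assumptions; a real deployment works with calibrated camera data and its own validated safety distances.

```python
import numpy as np

SAFETY_RADIUS_M = 1.5  # assumed threshold; real systems tune and certify this

def min_distance(human_pts: np.ndarray, robot_pts: np.ndarray) -> float:
    """Smallest distance between any tracked human keypoint and any robot
    point; both inputs are (N, 2) arrays of metric coordinates."""
    diffs = human_pts[:, None, :] - robot_pts[None, :, :]
    return float(np.linalg.norm(diffs, axis=-1).min())

def must_stop(human_pts, robot_pts) -> bool:
    """Emergency-stop decision: stop whenever any human keypoint enters
    the safety radius around the robot."""
    return min_distance(np.asarray(human_pts), np.asarray(robot_pts)) < SAFETY_RADIUS_M
```

For example, `must_stop([[0.0, 0.0]], [[1.0, 0.0]])` triggers a stop (1.0 m apart), while the same check at 3.0 m does not.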
In the other project, TRREX, in which the CVC has participated alongside Infaimon and Leitat, human pose estimation is applied so that robots transporting material on production lines can obey hand signals given by the operators, such as stop, turn left or continue.
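A toy rule-based version of such a gesture-to-command mapping might look like the following. The keypoint names, pixel coordinates and thresholds are invented for illustration; the actual project uses trained pose-estimation models rather than hand-written rules.

```python
def command_from_pose(keypoints: dict) -> str:
    """Map 2D pose keypoints (image coordinates, y grows downward) to a
    robot command using simple geometric rules. Purely illustrative."""
    # Raised right hand (wrist above shoulder) -> stop.
    if keypoints["right_wrist"][1] < keypoints["right_shoulder"][1]:
        return "stop"
    # Left arm extended well beyond the shoulder -> turn left.
    if keypoints["left_wrist"][0] < keypoints["left_shoulder"][0] - 50:
        return "turn_left"
    return "continue"

# Operator raising the right hand above shoulder height.
pose = {"right_wrist": (300, 100), "right_shoulder": (280, 200),
        "left_wrist": (200, 210), "left_shoulder": (220, 200)}
```

Here `command_from_pose(pose)` yields "stop", since the right wrist sits above the right shoulder in the image.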
- Automatic segmentation for creative and audiovisual industries
Body segmentation can also be of great use in the creative and audiovisual sectors. Currently, segmentation in video post-production is done manually. Automatic segmentation reduces the time and effort this task requires and can be applied in many areas: video and image post-production, graphic design, marketing, avatar generation, etc.
The CVC is also collaborating on a project in this direction. As part of the ViVIM project, the CVC has developed a module for automatic segmentation in audiovisual post-production.
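At its core, what such a module automates is the mask-based compositing step that previously required manual rotoscoping: once a per-pixel person mask exists, swapping the background is a single blend. A minimal sketch with toy data (this is the generic operation, not the actual module):

```python
import numpy as np

def composite(frame: np.ndarray, mask: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Keep the masked (person) pixels of `frame` and fill everything
    else from `background`; all inputs share the same height and width."""
    mask3 = mask[..., None].astype(frame.dtype)  # broadcast mask over RGB
    return frame * mask3 + background * (1 - mask3)

# Toy 2x2 frame: uniform grey foreground over a black background,
# with a diagonal "person" mask.
frame = np.full((2, 2, 3), 200, dtype=np.uint8)
background = np.zeros((2, 2, 3), dtype=np.uint8)
mask = np.array([[1, 0], [0, 1]], dtype=np.uint8)
out = composite(frame, mask, background)
```

Masked pixels keep the frame's value while the rest take the new background, which is exactly the effect an editor otherwise paints frame by frame.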
Looming Factory is a consortium coordinated by the UPC in partnership with CVC, I2CAT, UOC, CIM-UPC, UPF, Leitat, Eurecat, UB and CTM, with the aim of grouping, consolidating and guiding current Industry 4.0 research by the main R&D&I centres in Catalonia towards industrial demonstrators that verify and validate current research results. This project is co-funded by the European Regional Development Fund in the framework of the ERDF 2014-2020 Operational Program for Catalonia.
The TRREX project (Enabling Technologies for Extended Range Robots for the flexible factory) is a consortium of companies and centres (Infaimon, LEITAT, TEKNIKER, CVC, IRII, CARTIF, CIRCE Foundation and MCIA Centre) funded by the Programa Estratégico de Consorcios de Investigación Empresarial Nacional (CIEN) of the Centro para el Desarrollo Tecnológico Industrial (CDTI), with the aim of researching and advancing technologies that contribute to the deployment of mobile industrial robots for the factories of the future.
ViVIM is a collaborative project aimed at implementing an innovative media production and consumption system. The ViVIM consortium is formed by I2CAT, the Catalan Broadcasting Corporation (CCMA), Eurecat, Visyon and CVC, and is funded by the Agency for Business Competitiveness of the Government of Catalonia within the RIS3CAT Media Community under reference number COMRDI18-1-008.