
CVC Open Day 2025
The Computer Vision Center hosted its annual Open Day, welcoming students and citizens interested in discovering how research in computer vision is shaping the future. The event ran in two sessions, one in the morning and one in the afternoon, with around 50 attendees in total. We appreciate the strong interest shown by all participants!
The visit began with an introduction to the CVC’s mission and research areas, followed by a presentation of our Talent Program by Jordi Gonzalez, aimed at guiding students into the world of research. We also had the pleasure of hearing from Dr. Carles Sánchez, who explained and gave a live demo of digital bronchoscopy, a system that highlights recent advances in medical imaging.

Coen Antens, Head of the Innovation Unit, and Ayan Banerjee, PhD student at CVC, guided participants through several interactive demos showcasing some of our ongoing projects. These included the Pedagogic Demonstrator, focused on facial expressions and generative AI, as well as the AI Comic Generation Demo, which explores creative applications of artificial intelligence. Meanwhile, the laboratory visits were divided into two parts: Pau Cano introduced attendees to our autonomous driving simulator and explained how we use it to test and develop intelligent vehicle systems, while Dr. Javier Vázquez and Danna Xue presented the color imaging lab, where visitors learned about ongoing research in computational color and image enhancement.

Visitors had the opportunity to meet our PhD students and current interns, who shared insights into their ongoing projects in areas such as document analysis, healthcare, and generative AI. They also spoke about their personal experiences working in AI research, offering valuable perspectives to those considering a similar path. Each of them presented their specific research area through short interactive explanations, allowing attendees to understand the real-world applications of their work.
Adrià Molina, PhD student at the CVC, spoke about his work in the Document Analysis Group, where he uses computer vision and deep learning to study historical documents. He is currently contributing to the Historical Archives Project, which focuses on making old handwritten texts easier to read and analyze with AI tools.
Dipam Goswami, PhD student at the CVC in the Learning and Machine Perception (LAMP) group, shared his research on continual learning. He explained how his work focuses on building AI systems that can keep learning new information over time without forgetting what they’ve already learned.
Héctor Laria, PhD student at the CVC, shared his research on data-efficient generative models in continual learning scenarios. He explained how his work focuses on creating AI systems that can learn continuously from new data without forgetting previous information, making them more efficient and adaptable over time.
Artemis Llabrés, PhD student at the CVC, presented her research on multimodal models for document understanding. She explained how her work combines visual and textual information to help AI systems better interpret and process documents, improving tasks like automated reading and analysis.
Carlota Criado, intern at the CVC, presented her project related to the analysis of colorectal medical images. She explained how computer vision techniques can support early detection and diagnosis by helping doctors better understand and interpret these types of medical images.
Paula Font, intern at the CVC, explained her work on developing a system to improve searching for related topics within document collections. Her project aims to help AI models better understand the content of documents so users can find relevant information more easily.
Gerard Asbert, intern at the CVC, talked about his project focused on the analysis of historical music scores. He explained how computer vision techniques are being used to help machines read and interpret old sheet music, making it easier to preserve and study musical heritage.
Marc Cases, intern at the CVC, explained how his project uses EEG and eye-tracking data from simulated driving sessions to better understand how people drive. He shared how this information is processed using specific software and how neural networks are being tested to help improve autonomous driving systems.

Finally, Ainoa Contreras presented a demonstration of our autonomous vehicle and provided insight into how the system operates and is being developed within the center.

To all the CVC researchers who contributed to this event: thank you for your involvement. We hope all attendees found the visit informative and engaging!