Experimental Analysis and Complexity Scaling of Denoising Diffusion Probabilistic Models for Image Synthesis
Abstract:
This talk will present an experimental study on the behaviour, scalability, and generative quality of Denoising Diffusion Probabilistic Models (DDPMs) for image synthesis across datasets of increasing complexity.
Three case studies are analysed and compared using the FID (Fréchet Inception Distance) evaluation metric and the ability of downstream classifiers to discriminate synthetically generated images from real ones.
- Baseline: a class-conditional DDPM on MNIST (greyscale, 28×28 resolution)
- Scaling to natural images: DDPM on CIFAR-10 (RGB, 32×32 resolution)
- Medical images: DDPM on the ISIC 2019 dermatoscopic dataset (RGB, 192×192 resolution, class-imbalanced)
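For context on the evaluation metric used across all three case studies: FID measures the distance between Gaussian fits to Inception-network features of real and generated images. A minimal NumPy/SciPy sketch of the distance computation itself (feature extraction with an Inception network is omitted; the function name and toy inputs are illustrative, not from the talk):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between two Gaussians N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * (sigma1 @ sigma2)^{1/2})."""
    diff = mu1 - mu2
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        # sqrtm can return tiny imaginary components from numerical error
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Toy check: identical feature statistics give a distance of zero
mu, sigma = np.zeros(4), np.eye(4)
print(frechet_distance(mu, sigma, mu, sigma))  # → 0.0
```

In practice the means and covariances are estimated from Inception-v3 activations over large samples of real and generated images; a lower FID indicates that the generated distribution is closer to the real one.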
Lessons learned: DDPMs scale, but they struggle with subtle, domain-specific textures and incur substantial computational overhead. Performance degrades as semantic and visual complexity increase. The overall lesion structure of the ISIC medical images is captured, but fine diagnostic clinical details are harder to reproduce.
The FID metric correlates with image interpretability for MNIST and CIFAR-10, but is much less informative for the ISIC medical data.
Short bio:
Petia Georgieva is Professor, DSc, with the University of Aveiro, Portugal, where she teaches machine learning and artificial neural network modelling. She is a Researcher with the Institute of Electronics and Informatics Engineering of Aveiro (IEETA), Portugal. Her research interests include machine learning, deep learning, and data mining, with an application focus on image processing, wireless communications, and brain-computer interfaces. She has coauthored more than 150 publications registered in Scopus and supervised 14 Ph.D. dissertations and 66 Master's theses. She is a Member of the Board of Governors of the International Neural Network Society (INNS) and an editor of the journals Neural Networks and Pattern Recognition.








