While machine learning models are continually improving, for most tasks they fall short of perfect predictive performance. To be a valuable tool for decision-making under uncertainty, probabilistic predictive models should therefore come with statistical guarantees on the quality of their predictions. Research into calibration regained popularity after repeated empirical observations of overconfidence in deep neural networks. This renewed interest sparked work on calibration metrics and on the remediation of miscalibration. This talk will focus on how to make sense of and use probabilistic predictions, with a primer on confidence estimation, calibration, and failure prediction.
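One widely used calibration metric is the Expected Calibration Error (ECE), which bins predictions by confidence and averages the gap between confidence and accuracy in each bin. The sketch below is illustrative, not from the talk; the function name, equal-width binning scheme, and toy data are assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: the bin-size-weighted average gap between a model's
    stated confidence and its observed accuracy."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        # Select predictions whose confidence falls in this bin.
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            avg_conf = confidences[mask].mean()
            accuracy = correct[mask].mean()
            # Weight the confidence/accuracy gap by the bin's share of samples.
            ece += mask.mean() * abs(avg_conf - accuracy)
    return ece

# Toy example: a model that says 80% and is right 8 times out of 10
# is perfectly calibrated, so its ECE is zero.
conf = np.full(10, 0.8)
hits = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
print(expected_calibration_error(conf, hits))  # → 0.0
```

A low ECE alone does not guarantee useful predictions (a model that always outputs the base rate can be perfectly calibrated), which is one reason the talk pairs calibration with confidence estimation and failure prediction.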
Jordy Van Landeghem received an M.A. degree in Linguistics in 2015 and an M.Sc. degree in Artificial Intelligence in 2017, both from KU Leuven, where he is currently pursuing a PhD degree in computer science. He completed research internships at Oracle and Nuance Communications, and is currently the Lead AI Researcher at Contract.fit. His industrial PhD project, titled "Intelligent Automation for AI-driven Document Understanding", focuses on the fundamentals of probabilistic deep learning, with an emphasis on calibration, uncertainty quantification, and out-of-distribution robustness, in order to obtain more reliable machine learning systems. He is currently leading the creation of the ICDAR 2023 competition on Document UnderstanDing of Everything (DUDE 😎), with follow-up work on efficient and robust document understanding in collaboration with CVC partners.