Tutorial: Normalizing Flows and Invertible Neural Networks in Computer Vision and the Sciences

Speakers: Lars Kühmichel, Peter Sorrenson, Felix Draxler (Heidelberg University)

Time: 13:30–17:00
Coffee break: 15:00–15:30

Abstract

In recent years, generative models have emerged as a powerful machine learning paradigm for a wide range of applications spanning computer vision and the sciences. This three-hour tutorial provides a thorough introduction to the fundamental concepts and state-of-the-art applications of generative modeling.

We first delve into the foundational models, including Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), Invertible Neural Networks, Diffusion Models, and Flow Matching. We explain the mechanisms that allow each model to be trained, to generate samples, and to estimate likelihoods, supported by intuitive explanations and illustrative examples.
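For illustration only (this snippet is not material from the tutorial itself): a minimal sketch of the change-of-variables idea behind normalizing flows, using a one-dimensional affine flow whose parameters a and b are purely hypothetical. The exact log-likelihood is the latent log-density plus the log absolute Jacobian determinant of the map from data to latent space.

    import numpy as np

    def flow_forward(x, a, b):
        # Invertible affine map x -> z = (x - b) / a and its log |det Jacobian|.
        z = (x - b) / a
        log_det = -np.log(np.abs(a)) * np.ones_like(x)
        return z, log_det

    def log_likelihood(x, a, b):
        # Change of variables: log p_X(x) = log N(z; 0, 1) + log |dz/dx|.
        z, log_det = flow_forward(x, a, b)
        log_pz = -0.5 * (z ** 2 + np.log(2.0 * np.pi))
        return log_pz + log_det

    # Sampling inverts the flow: draw z ~ N(0, 1), then x = a * z + b.
    x = 2.0 * np.random.randn(5) + 1.0
    print(log_likelihood(x, a=2.0, b=1.0))

Deep normalizing flows replace the affine map with a stack of learned invertible layers, but training, sampling, and likelihood evaluation follow this same pattern.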

We then explore a wide range of real-world applications in computer vision and the sciences, such as image generation, representation learning, manifold learning, out-of-distribution detection, molecule generation, and solving inverse problems.

Finally, the tutorial discusses the frontiers of generative modeling, highlighting ongoing challenges and future prospects of the field.

Note that the tutorial runs on 19/9/2023, 13:15–17:00, in parallel with the workshop.