Tutorials

 

Tutorial A) Tensors in Computer Vision and Image Processing

Klas Nordberg, ( Linköping University, Sweden )

Tuesday, August 30th, 9-12:30

Tensors in Computer Vision and Image Processing

The concept of tensors has been around in image processing and computer vision for a few decades, with two main application areas: as descriptors of local features in image data, mainly in the context of local orientation, and in geometry, where they are used for representing various types of matching constraints or mappings between geometric objects. The tutorial consists of three parts. (1) A mathematical background on what tensors are and why it is reasonable that such a rather abstract mathematical construction should be useful in different fields of physics as well as in image processing and computer vision. Notation issues are also discussed. (2) An overview of tensors for orientation representation, with applications to motion estimation, interest point detection, and image de-noising. Recent developments in this field are extensions of the basic orientation tensors to more complex descriptors, e.g., of multiple orientations or multiple line segments, as well as novel methods for estimating orientation tensors. (3) An overview of tensors in multiple-view geometry and multiple-point geometry. Some recent developments in this field are presented, such as a general framework for constructing both constraint tensors for multiple views/points and mappings for the reconstruction of points/views based on multiple views/points, and minimal parameterizations of such tensors.
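
To make the orientation-tensor part concrete, here is a minimal sketch (not taken from the tutorial material) of the classic 2x2 structure tensor, built from smoothed outer products of image gradients. It assumes NumPy/SciPy, and the function and parameter names are purely illustrative.

    # Minimal sketch: 2x2 orientation (structure) tensor from image gradients.
    # Illustrative only; names and parameter choices are not from the tutorial.
    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def orientation_tensor(image, sigma=2.0):
        """Per-pixel 2x2 structure tensor of a gray-scale image."""
        ix = sobel(image.astype(float), axis=1)   # horizontal gradient
        iy = sobel(image.astype(float), axis=0)   # vertical gradient
        # Average the outer product of the gradient over a local neighborhood.
        txx = gaussian_filter(ix * ix, sigma)
        txy = gaussian_filter(ix * iy, sigma)
        tyy = gaussian_filter(iy * iy, sigma)
        return txx, txy, tyy

    def dominant_orientation(txx, txy, tyy):
        """Angle of the eigenvector with the largest eigenvalue (double-angle formula)."""
        return 0.5 * np.arctan2(2.0 * txy, txx - tyy)

Averaging the outer products (rather than the gradients themselves) is what lets the tensor's eigenvalues distinguish, e.g., a single dominant orientation from a corner or an unstructured region.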

Content

  1. Introduction to tensors
    • What are tensors?
    • Why do we need them?
    • Indices or no indices?
    • Operations on tensors
  2. Tensors in image processing
    • Orientation tensors
    • Orientation tensor processing
    • Extensions of orientation tensors
    • Applications
  3. Tensors in geometry
    • Matching constraints, is there a general principle?
    • Reconstruction, not only of points
    • Minimal representations

Tutorial B) Random Field Models for Natural Image and Scene Statistics

Stefan Roth, ( Technical University Darmstadt )

Tuesday, August 30th, 9-12:30

Random Field Models for Natural Image and Scene Statistics

Images, the basic input to any computer vision or biological vision system, span a vast space. As a simple example, there are about 10^1,000 different 8-bit gray-scale images of a size as small as 20 by 20 pixels. However, most of these images lack any "interesting" structure and are very unlikely to be encountered by an eye or a camera in the real world. Those that are encountered, on the other hand, are loosely tagged as natural images. Though occupying only a tiny fraction of the image space, natural images stand out with particular statistical properties. Other dense scene representations, such as depth or motion, share similar characteristics.
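
As a back-of-the-envelope check of that count (an illustrative calculation, not part of the abstract): a 20 by 20 image has 400 pixels with 256 gray levels each, i.e., 256^400 ≈ 10^963 images, which is on the order of 10^1,000.

    # Illustrative check of the image count quoted above.
    import math
    pixels = 20 * 20                        # 400 pixels in a 20 x 20 image
    levels = 256                            # 8-bit gray values per pixel
    exponent = pixels * math.log10(levels)  # log10(256 ** 400)
    print(f"number of images ~ 10^{exponent:.0f}")  # -> 10^963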

Recently, we have witnessed a surge of interest in modeling the statistics of natural images and scenes, with applications ranging from low-level (e.g., denoising, deblurring, stereo, optical flow) through mid-level (e.g., segmentation, color constancy) to high-level vision (e.g., recognition). Random field models have emerged as a powerful tool in this context. The goal of this tutorial is to introduce random fields and their applications to modeling natural image and scene statistics. After reviewing basic statistical properties of images, scene depth, and motion, I will discuss a variety of random field models, covering the range from early works to the current state of the art, from pairwise to high-order models, and from generative (MRF) to discriminative (CRF) approaches. Finally, inference and applications in various domains will be discussed as well.
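
As a toy illustration of the pairwise models the tutorial starts from (a sketch with hand-picked parameters, not one of the learned models covered in the slides): a denoising energy with a quadratic data term and a robust smoothness penalty on 4-connected neighboring pixels, minimized by plain gradient descent. NumPy is assumed.

    # Toy pairwise MRF for denoising:
    #   E(x) = sum_i (x_i - y_i)^2 / (2 sigma^2) + lam * sum_(i,j) rho(x_i - x_j)
    # with the robust penalty rho(d) = log(1 + d^2 / 2). Parameters are made up.
    import numpy as np

    def rho_prime(d):
        # Derivative of rho(d) = log(1 + d^2 / 2)
        return 2.0 * d / (2.0 + d ** 2)

    def mrf_denoise(noisy, lam=0.5, sigma_n=0.1, steps=300, lr=1e-3):
        x = noisy.astype(float)
        for _ in range(steps):
            grad = (x - noisy) / sigma_n ** 2          # data term
            for axis in (0, 1):                        # 4-connected pairwise terms
                d = np.diff(x, axis=axis)              # x_j - x_i along this axis
                g = lam * rho_prime(d)
                hi = [slice(None)] * 2; hi[axis] = slice(1, None)
                lo = [slice(None)] * 2; lo[axis] = slice(None, -1)
                grad[tuple(hi)] += g                   # dE / dx_j
                grad[tuple(lo)] -= g                   # dE / dx_i
            x = x - lr * grad
        return x

The models discussed in the tutorial go well beyond this sketch, from pairwise to high-order potentials, and with learned rather than hand-chosen parameters.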

Content (slides are available here)

  1. Introduction
    • Motivation and application examples
    • Image and dense scene representations
    • Basic statistical properties
    • Local statistical models
  2. Random field models of image and scene statistics
    • Markov random fields (MRFs)
    • Early random field models
    • Learning and Inference
    • Conditional random fields (CRFs)
    • Survey of recent MRF and CRF models
  3. Applications
    • Image restoration
    • Image-based rendering
    • Flow estimation
    • Stereo
    • Feature learning

Tutorial C) Higher-Order Feature Learning: Building a Computer Vision "Swiss Army Knife"

Roland Memisevic, ( Goethe University Frankfurt )

Tuesday, August 30th, 14-17:30

Higher-Order Feature Learning:
Building a Computer Vision "Swiss Army Knife"

 

In many vision tasks, good performance is all about the right representation. Learning of image features (AKA Sparse Coding or Dictionary Learning) has therefore become a standard approach to many recognition, de-noising and other vision tasks.

While standard feature learning works well on static images, most interesting tasks go beyond these: Problems like video and motion understanding, stereo vision, invariant recognition, etc. do not come in the form of unordered, static images. Instead, it is the relationship between images that carries the relevant information.

Recently, Higher-order Sparse Coding models have emerged to address this issue, and many of these models are currently the best performing methods in tasks involving videos, stereo data, or image pairs. Many of the models were introduced independently and for various different tasks, but they are all based on the same core idea: sparse codes can act like "gates" that modulate the connections between the other variables in a model. This allows them to represent changes in images, and it turns model parameters into "stereo", "mapping" or "spatio-temporal" features.
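
As a purely illustrative sketch of that gating idea (the weights below are random; in practice they are learned, and all names are made up for this example): inference in a small factored model of an image pair, where filter responses to the two images are multiplied and the resulting mapping units act as the "gates" described above. NumPy is assumed.

    # Multiplicative gating in a factored higher-order model of an image pair (x, y).
    # Weights are random here for brevity; real models learn them (e.g., as gated
    # autoencoders). All names are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n_pix, n_fac, n_map = 256, 100, 50                 # pixels, factors, mapping units
    Wx = rng.normal(scale=0.1, size=(n_fac, n_pix))    # filters applied to image x
    Wy = rng.normal(scale=0.1, size=(n_fac, n_pix))    # filters applied to image y
    Wh = rng.normal(scale=0.1, size=(n_fac, n_map))    # pools factors into mapping units

    def infer_mapping(x, y):
        """Mapping code h: units that 'gate' the connection between x and y."""
        factors = (Wx @ x) * (Wy @ y)                  # multiplicative interaction
        return 1.0 / (1.0 + np.exp(-(Wh.T @ factors))) # sigmoid mapping units

    def reconstruct_y(x, h):
        """Given x and the mapping code, predict y through the same gates."""
        return Wy.T @ ((Wx @ x) * (Wh @ h))

    x = rng.normal(size=n_pix)
    y = np.roll(x, 1)                                  # e.g., a shifted copy of x
    h = infer_mapping(x, y)                            # encodes the transformation
    y_hat = reconstruct_y(x, h)

Because the code h describes how x relates to y rather than either image alone, the learned parameters end up encoding "stereo", "mapping" or "spatio-temporal" structure, as described above.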

The tutorial will show how Higher Order Features allow us to learn to "relate" images. It will discuss efficient learning and inference methods and it will present a tour of recent applications. The tutorial will also discuss some connections of these models to biological models of simple and complex cells and to multi-layer and recent deep learning methods.

 

Content:

  1. Introduction
    • Sparse Coding, Feature Learning and Natural Images
    • Learning how Images Change
    • Examples From Stereo, Video and Motion Modeling
  2. Models and Methods
    • Multiplicative Gating
    • Multiplication and Phase Information
    • Relation to Biological Models
    • Relation to Feature Pooling and Deep Learning
  3. Inference And Learning
    • Gated Inference
    • Learning Higher Order Features
    • Efficient Spatio-Temporal Learning
    • Historical Perspective
  4. A Tour of Higher-Order Features in Practice
    • Video and Action Understanding
    • Image Matching
    • Learning Within-Image Correlations
    • Learning for Invariant Classification
    • Learning Stereo Vision

Tutorial D) Convex Optimization for Computer Vision

Thomas Pock ( Graz University of Technology ), & Daniel Cremers ( Technical University of Munich )

Tuesday, August 30th, 14-17:30

Convex Optimization for Computer Vision

Variational methods have had great success in solving many problems in computer vision and image processing. They can be divided into two fundamentally different classes: convex and non-convex problems. The obvious advantage of convex problems is that they allow a global minimum to be computed. This means that the quality of the solution depends solely on the accuracy of the variational model. On the other hand, for non-convex problems, the quality of the solution depends on both the model and the optimization algorithm, since in general only a local minimizer can be computed. The goal of this tutorial is therefore, firstly, to give a gentle introduction to convex optimization. Secondly, we discuss recent applications of convex optimization to computer vision and image processing problems. We will cover modern techniques such as convex relaxation techniques, primal-dual optimization schemes, and real-time capable implementations on the GPU.
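
As a small, self-contained example of the primal-dual schemes mentioned above (an illustrative sketch, not code from the tutorial): a first-order primal-dual iteration applied to TV-regularized denoising (the ROF model). Step sizes and the regularization weight are illustrative choices; NumPy is assumed.

    # Primal-dual iteration for TV denoising: min_u lam/2 * ||u - f||^2 + TV(u).
    # Illustrative sketch; parameter choices are not from the tutorial.
    import numpy as np

    def grad(u):
        """Forward-difference gradient with Neumann boundary conditions."""
        gx = np.zeros_like(u); gy = np.zeros_like(u)
        gx[:, :-1] = u[:, 1:] - u[:, :-1]
        gy[:-1, :] = u[1:, :] - u[:-1, :]
        return gx, gy

    def div(px, py):
        """Divergence, the negative adjoint of grad."""
        dx = np.zeros_like(px); dy = np.zeros_like(py)
        dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
        dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
        return dx + dy

    def tv_denoise(f, lam=10.0, iters=200):
        f = np.asarray(f, dtype=float)
        tau = sigma = 1.0 / np.sqrt(8.0)               # tau * sigma * ||grad||^2 <= 1
        u = f.copy(); u_bar = f.copy()
        px = np.zeros_like(f); py = np.zeros_like(f)
        for _ in range(iters):
            gx, gy = grad(u_bar)                       # dual ascent step
            px += sigma * gx; py += sigma * gy
            norm = np.maximum(1.0, np.sqrt(px**2 + py**2))
            px /= norm; py /= norm                     # project onto |p| <= 1
            u_old = u
            u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
            u_bar = 2.0 * u - u_old                    # over-relaxation (theta = 1)
        return u

Each iteration uses only image gradients, divergences, and pointwise projections, which is what makes such schemes well suited to the parallel GPU implementations discussed in the tutorial.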

Content

  1. Introduction to convex optimization
    • convex sets
    • convex functions
    • least squares problems
    • linear programming problems
  2. Optimization algorithms
    • generic methods (gradient descent, Newton, ...)
    • constrained optimization
    • accelerated gradient methods
    • primal-dual algorithms
    • parallelization on the GPU
  3. Applications
    • image restoration
    • optical flow
    • the Mumford-Shah model
    • minimal partitions and minimal surfaces
    • 3D reconstruction

 

Workshops

 

Workshop 'New Challenges in Neural Computation (NC2)' ...more

Tuesday, August 30th, 9-18:30 (tentative)

 

Symposium of the International Federation of Classification Societies (IFCS) ...more

Tuesday, August 30th, full day (tentative)