C3V (Challenges and Chances for Computer Vision) Tutorials

The tutorials take place on Tuesday, September 2nd. Details (time, place) will be announced later.
 

 

Tutorial 1, Tuesday, September 2nd
The Hitchhiker's Guide to Biomedical Imaging

Daniel Tenbrinck, École Nationale Supérieure d'Ingénieurs de Caen, France


Biomedical image analysis is an important field in the life sciences that depends strongly on advances in image processing and computer vision. Because of the non-standard imaging techniques used for data acquisition, a variety of challenging problems arise when working with biomedical data. In order to master typical image analysis tasks, e.g., segmentation or motion estimation, one has to investigate innovative ways to describe the given data and subsequently propose feasible solutions that are applicable in basic research and daily clinical routine.

This tutorial aims to provide an overview of the challenges biomedical imaging poses for computer vision and, at the same time, to offer practical ideas for dealing with problems such as physical noise perturbations, structural artifacts, inhomogeneity, and fuzzy edges. We take a tour through the universe of biomedical imaging, stopping along the way at popular imaging modalities and their respective characteristics, e.g., ultrasound imaging, positron emission tomography, and fluorescence microscopy. Generally applicable techniques are illustrated on real application data. This tutorial is meant as a useful guide both for researchers already working in this expanding field of computer vision and for those who dare to explore this fascinating topic for the first time.

 

Tutorial 2, Tuesday, September 2nd
Throwing Computer Vision Overboard: How to Handle Underwater Light Attenuation and Refraction

Anne Jordt, University of Kiel, Germany
Kevin Köser, GEOMAR Helmholtz Centre for Ocean Research, Germany

 

Besides professional survey cameras mounted on autonomous or remotely operated underwater vehicles, there is now a huge variety of DSLRs, action cameras for divers, and even waterproof cell phones for taking photos and videos underwater. At first sight it might seem that putting a camera in a waterproof housing (with a glass port to look outside) already allows it to be used "in the usual way", e.g., for measuring, mapping, and reconstruction.

However, several challenges have to be addressed. First, because air, glass, and water have different refractive indices, light rays are refracted at the interface (the "port"), which violates the pinhole camera model. In addition, the port itself can act as a lens and change the geometry of the focal surface. Second, water is effectively transparent only for the blue/green part of the visible spectrum, while red, infrared, and virtually all other electromagnetic radiation is significantly attenuated or blocked completely, leading to distance-dependent color corruption. Finally, back scattering and forward scattering of light can degrade image quality if they are not handled properly.
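Both effects can be sketched in a few lines of code. The following Python snippet is an illustrative sketch, not material from the tutorial: the refractive indices and per-channel attenuation coefficients are assumed, rough textbook values. It refracts a camera ray twice at a flat port using Snell's law and applies a simple Beer-Lambert-style attenuation to a color, hinting at why the pinhole model breaks and why distant objects drift towards blue/green.

```python
import numpy as np

# Rough refractive indices (assumed values for illustration only).
N_AIR, N_GLASS, N_WATER = 1.0, 1.5, 1.33

def refract(direction, normal, n1, n2):
    """Refract a ray direction at an interface (Snell's law, vector form)."""
    d = direction / np.linalg.norm(direction)
    cos_i = -np.dot(normal, d)            # normal points towards the camera
    ratio = n1 / n2
    sin_t2 = ratio**2 * (1.0 - cos_i**2)
    if sin_t2 > 1.0:                       # total internal reflection
        return None
    cos_t = np.sqrt(1.0 - sin_t2)
    return ratio * d + (ratio * cos_i - cos_t) * normal

# A ray leaving the camera is bent twice (air -> glass -> water); off-axis rays
# no longer pass through a single projection centre, so the pinhole model fails.
normal = np.array([0.0, 0.0, -1.0])        # flat port, normal towards the camera
ray_air = np.array([0.3, 0.0, 1.0])
ray_glass = refract(ray_air, normal, N_AIR, N_GLASS)
ray_water = refract(ray_glass, normal, N_GLASS, N_WATER)

# Beer-Lambert-style attenuation: red is absorbed much faster than blue/green,
# so colors shift with distance (per-channel coefficients in 1/m, assumed).
beta_rgb = np.array([0.40, 0.07, 0.04])

def attenuate(color_rgb, distance_m):
    return np.asarray(color_rgb) * np.exp(-beta_rgb * distance_m)

print("refracted ray in water:", ray_water)
print("white patch seen through 5 m of water:", attenuate([1.0, 1.0, 1.0], 5.0))
```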

In this tutorial we provide an overview of these challenges and review approaches to solving them, focusing in particular on geometric problems related to imaging models, single-camera structure from motion, and mapping. We will start from the basics so that everybody should be able to follow; however, for the geometric parts, attendees should have a basic understanding of classical multiple view geometry (standard camera calibration, projection matrices, epipolar geometry, and ideally structure from motion).