
The Virtual Photo Set

SSF IIS11-0081 - Data driven scene characterization for realistic rendering

 


The project is supported by a grant from the Swedish Foundation for Strategic Research (SSF) through the Information Intensive Systems programme.
Project leader: Jonas Unger

 List of Publications

 

Project overview

This demonstrator project is motivated by the rapid growth of image data, sensor data and meta-data available on the internet, in databases, and from capture and measurement systems. These huge data sets and input sources make it possible to analyze the world around us in new ways. The information (pixel data) in these images is, in its raw form, unstructured and contains large redundancies and ambiguities. It is therefore essential to develop new theoretical tools for structuring these large data sets, and algorithms for extracting semantic information from visually rich scenes.

Traditional photography is only one modality that can be used to represent reality. In this project we take steps towards a richer documentation of scenes, and also provide the tools needed to modify and augment the captured reality. The Virtual Photo Set (VPS) will allow users to blend real and virtual objects and generate photo-realistic images in which the real and virtual objects are indistinguishable. It will also allow users to edit the captured reality, altering light sources, object placement, materials, and more. The VPS is a natural step in the ongoing paradigm shift in the way we capture and document reality.
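A core ingredient of such seamless blending is image-based lighting: virtual objects are shaded with the illumination captured from the real scene, for example as an HDR environment map, so that they pick up the same light as the real objects around them. The minimal numpy sketch below illustrates the idea for a single diffuse surface point; the map resolution, the toy environment values and the variable names are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np

def diffuse_ibl(normal, env_map):
    """Integrate a latitude-longitude HDR environment map against a
    Lambertian (cosine) lobe around the given surface normal."""
    h, w, _ = env_map.shape
    theta = (np.arange(h) + 0.5) / h * np.pi          # polar angle of each row
    phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi      # azimuth of each column
    sin_t = np.sin(theta)[:, None]
    # Unit light direction for every texel of the lat-long map.
    dirs = np.stack([sin_t * np.cos(phi)[None, :],
                     sin_t * np.sin(phi)[None, :],
                     np.cos(theta)[:, None] * np.ones((1, w))], axis=-1)
    # Solid angle covered by each texel in the lat-long parameterization.
    d_omega = (np.pi / h) * (2.0 * np.pi / w) * sin_t
    # Clamped cosine between the surface normal and each light direction.
    cos_term = np.clip(dirs @ np.asarray(normal, dtype=float), 0.0, None)
    weight = (cos_term * d_omega)[..., None]
    # Outgoing radiance of a white Lambertian surface (albedo / pi = 1 / pi).
    return (env_map * weight).sum(axis=(0, 1)) / np.pi

# Toy environment: a dim sky with one bright HDR "window" acting as key light.
env = np.full((64, 128, 3), 0.1)
env[10:20, 30:50] = [20.0, 18.0, 15.0]
print(diffuse_ibl(normal=[0.0, 0.0, 1.0], env_map=env))
```

In practice the environment map would come from HDR video capture of the real set rather than being constructed by hand, which is exactly the kind of input the capture hardware in this project provides.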

The project includes the development of novel multi-modal scene capture hardware, processing techniques for extraction of semantic scene information and scene reconstruction, and algorithms for compression and efficient image synthesis. The demonstrator developed within this project is built on a strong foundation of theoretical research and encompasses all steps from scene capture and processing to interactive editing and photo-realistic image synthesis.

In-depth overviews of results and applications with open data and source code from this project can be found at:

http://www.hdrv.org (HDR video capture, scene reconstruction and image-based lighting)

Virtual Photo Sets SIGGRAPH Asia 2015 course material (data and extensive course notes)

http://lumahdrv.org (an open-source, perceptually based HDR video codec; the sketch below illustrates the general idea of perceptual luminance encoding)
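Perceptually based HDR encoding maps floating-point luminance through a nonlinearity matched to the eye's sensitivity before quantizing to integer code values, so that quantization errors stay roughly equally visible across the full luminance range. The sketch below is a generic log-luminance illustration of that idea, not the actual Luma HDRv transfer function; the bit depth and luminance range are assumptions.

```python
import numpy as np

def encode_luminance(lum, lum_min=1e-2, lum_max=1e4, bits=12):
    """Quantize linear luminance (cd/m^2) on a log scale to 'bits'-bit codes."""
    levels = 2 ** bits - 1
    t = (np.log10(np.clip(lum, lum_min, lum_max)) - np.log10(lum_min)) \
        / (np.log10(lum_max) - np.log10(lum_min))
    return np.round(t * levels).astype(np.uint16)

def decode_luminance(code, lum_min=1e-2, lum_max=1e4, bits=12):
    """Map integer code values back to linear luminance."""
    t = code.astype(float) / (2 ** bits - 1)
    return 10.0 ** (np.log10(lum_min) + t * (np.log10(lum_max) - np.log10(lum_min)))

# Luminances spanning roughly five orders of magnitude round-trip with a small,
# roughly constant relative error -- the perceptually sensible behaviour.
lum = np.array([0.05, 1.0, 250.0, 4000.0])
print(np.abs(decode_luminance(encode_luminance(lum)) - lum) / lum)
```

The resulting integer frames could then be handed to a conventional video encoder; a real codec also has to handle colour, temporal prediction and bitstream details, which this sketch leaves out.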

Project team

This demonstrator project runs from 2012 to 2016 and is carried out as a collaboration between:

Computer Graphics and Image Processing Group, Division of Media and Information Technology (MIT), Linköping University

Computer Vision Laboratory (CVL), Linköping University

Sensor Fusion Group, Automatic Control (AC), Linköping University

 

The following researchers and engineers are working within the project:

Senior researchers:

Prof. Anders Ynnerman (MIT)

Prof. Michael Felsberg (CVL)

Prof. Fredrik Gustafsson (AC)

Prof. Reiner Lenz (MIT)

Dr. Jonas Unger (MIT)

Dr. Per-Erik Forssén (CVL)

PhD students and research engineers:

Gabriel Eilertsen (MIT)

Saghi Hajisharif (MIT)

Joel Kronander (MIT)

Ehsan Miandji (MIT)

Hanna Nyqvist (AC)

Hannes Ovrén (CVL)

Andrew Gardner (MIT)

Per Larsson (MIT)

Giulia Meneghetti (CVL)

Apostolia Tsirikoglou (MIT)

 

 


