
High Dynamic Range Video with Applications

Project leader: Jonas Unger

Grant supported from: CENIIT


This project has supported the development of both open source software and open data.

Download HDR-video sequences at: www.hdrv.org

Download the Depends Workflow Management software at: www.dependsworkflow.net

An overview of tone mapping operators for High Dynamic Range (HDR) video: HDR-video TMO evaluation




High Dynamic Range (HDR) imaging is a rapidly developing technology that will replace today's 8-bit digital imaging. A typical indoor scene exhibits a dynamic range on the order of 1:100,000, and an outdoor scene can reach 1:10,000,000 or more. Conventional cameras with 8-bit quantization are therefore unable to capture such scenes correctly, see Figure 1.

Low dynamic range image sequence

Figure 1: A set of low dynamic range exposures captured with varying exposure times to cover the dynamic range of the scene.

An HDR image, usually stored using 32-bit floating point values, is capable of representing the entire dynamic range of the photographed scene, producing a reliable photometric measurement of the radiance registered by the camera. This makes it possible to accurately capture and represent the exact lighting conditions (scene radiance) found in any real world scene. In contrast to conventional images, this enables significantly more advanced display, computer vision, and image post-processing algorithms, as well as applications such as computer graphics rendering of photo-realistic images of synthetic objects placed into the captured real world scene. Early adopters of HDR imaging are the movie special effects, computer games, and product and architectural visualization industries.
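The basic principle of building such an HDR image from a set of differently exposed 8-bit frames can be sketched in a few lines. The following is a minimal example assuming a linear camera response and a simple hat-shaped weighting function; it illustrates the general idea only, not the reconstruction algorithms developed in this project:

```python
import numpy as np

def assemble_hdr(exposures, times):
    """Merge differently exposed 8-bit images into one HDR radiance map.

    Assumes a linear camera response. Each exposure is divided by its
    exposure time to estimate radiance, and pixels near the black and
    saturation limits are down-weighted with a hat function. A pixel that
    is clipped in every exposure gets weight zero and maps to 0 here.
    """
    acc = np.zeros(exposures[0].shape, dtype=np.float64)
    wacc = np.zeros_like(acc)
    for img, t in zip(exposures, times):
        z = img.astype(np.float64) / 255.0   # normalize to [0, 1]
        w = 1.0 - np.abs(2.0 * z - 1.0)      # hat weight: trust mid-tones
        acc += w * z / t                     # per-exposure radiance estimate
        wacc += w
    return acc / np.maximum(wacc, 1e-8)
```

A pixel that saturates in the long exposure is recovered from the short one, and vice versa, which is exactly why the exposure series in Figure 1 suffices to cover the scene's dynamic range.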

The long term vision of this project is to enable the transition to the HDR paradigm and to bring HDR technology to a wide range of imaging and computer graphics applications. To reach this goal, the project focuses on theoretical research challenges from three different but closely related areas: HDR video capture and display, Image Based Lighting (IBL) using HDR light field imaging, and inverse global illumination. A key feature, especially in the early stages of the project, is the development and use of novel and unique HDR video cameras, as well as access to state-of-the-art HDR displays. The project includes both theoretical problems and algorithms as well as the development of capture hardware and software applications.




Based on the theoretical investigations of internal camera properties and the developed algorithms, we investigate methodology for adapting HDR capture, image reconstruction and processing to different applications. For this purpose, we have selected three application scenarios in which the dynamic range is an important factor. The applications place different requirements on local contrast, dynamic range, and accuracy.


Enhanced HDR image reconstruction

High quality image reconstruction with respect to the spatial and temporal domains (super-resolution), the spatio-spectral domain (demosaicing), radiometric response (HDR reconstruction), and optical blur correction (deblurring) has been well studied as separate problems. In current HDR imaging, these steps are often carried out in a step-by-step fashion with little or no interaction between them. The development of the next generation of algorithms and systems would greatly benefit from an integrated sampling framework that takes all these aspects into account in a formalized way. We use techniques from traditional image filtering (e.g. bilateral filtering and kernel regression) as well as statistical methods to develop models and algorithms specifically for HDR reconstruction, and investigate the different sources of sensor and camera noise. The overall goal is to develop models specifically for HDR imaging systems and to investigate a formalized framework for multi-dimensional image sampling.


Multi-sensor camera system

We have developed a formalized framework for image reconstruction based on input from multiple image sensors with different characteristics (resolution, filters, spectral response, etc.). Within this framework, we have implemented a set of HDR reconstruction algorithms which, in contrast to previous methods, carry out the different steps of the “traditional” imaging pipeline (demosaicing, resampling, denoising and HDR-assembly) in a single sampling operation and reconstruct the output pixels using Local Polynomial Approximations (LPA), see papers [7, 5] below. Our approach enables parallelization, and our GPU implementations achieve real-time performance on high resolution image data. To increase image quality, we have extended our algorithms with anisotropic filter supports which adapt to edges and other image features, see paper [14].
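The core idea of the LPA-based reconstruction can be illustrated with its simplest, zeroth-order, isotropic case, which reduces to Gaussian kernel regression over irregularly placed sensor samples. The sketch below covers only this simplest case; the published algorithms use higher polynomial orders and anisotropic, feature-adaptive kernels:

```python
import numpy as np

def lpa_reconstruct(sample_xy, sample_vals, query_xy, sigma=1.0):
    """Zeroth-order Local Polynomial Approximation (kernel regression).

    Each output pixel is a Gaussian-weighted average of nearby samples,
    which may come from several sensors on irregular, misaligned grids.
    `sample_xy` is (n, 2), `sample_vals` is (n,), `query_xy` is (m, 2).
    """
    out = np.empty(len(query_xy))
    for i, q in enumerate(query_xy):
        d2 = np.sum((sample_xy - q) ** 2, axis=1)   # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))        # isotropic Gaussian kernel
        out[i] = np.sum(w * sample_vals) / np.sum(w)
    return out
```

Because each output pixel depends only on its local neighbourhood, the computation is embarrassingly parallel, which is what makes the GPU implementations mentioned above practical.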

The multi-sensor image reconstruction algorithm described in [7, 14] has also been further developed to enable HDR-reconstruction from input data captured within a single image where the per-pixel gain varies over the image [16].  

DualISO HDR-image reconstruction

The figure illustrates how pairs of lines on the imaging sensor are amplified using different per-pixel gain settings. In digital photography, the pixel gain is commonly referred to as the ISO setting. The algorithm presented in [16] takes this type of dual-ISO image as input and reconstructs output HDR images with extended dynamic range. The algorithm has been implemented and tested for consumer cameras from Canon.
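The principle of dual-ISO reconstruction can be sketched as follows. This toy version assumes single rows (rather than the pairs of lines used on the actual sensor) alternating between a low and a high gain, a linear sensor, and fills clipped high-gain pixels from their low-gain neighbours; the actual algorithm in [16] performs a full resampling-based reconstruction:

```python
import numpy as np

def dual_iso_reconstruct(raw, gains=(1.0, 4.0), sat=1.0):
    """Toy HDR reconstruction from a row-interleaved dual-gain raw frame.

    Even rows use gains[0] (low ISO), odd rows gains[1] (high ISO).
    High-gain rows recover shadow detail; where they clip at `sat`,
    values are filled from the two neighbouring low-gain rows.
    """
    h, _ = raw.shape
    row_gain = np.where(np.arange(h) % 2 == 0, gains[0], gains[1])[:, None]
    lin = raw.astype(np.float64) / row_gain       # undo per-row amplification
    clipped = raw >= sat                          # clipped samples carry no signal
    for y in range(1, h - 1, 2):                  # interior high-gain (odd) rows
        bad = clipped[y]
        lin[y, bad] = 0.5 * (lin[y - 1, bad] + lin[y + 1, bad])
    return lin
```

The trade-off is visible even in this sketch: highlights in the high-gain rows must be interpolated, so the extended dynamic range comes at the cost of some vertical resolution, which the full reconstruction in [16] is designed to minimize.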

Results in this area include:

- A new camera sensor noise model [7].

- A new HDR-image reconstruction algorithm that performs all steps in the traditional imaging pipeline (demosaicing, geometric alignment, resampling, noise reduction and HDR-assembly) in a single unified sampling operation [5, 7, 14].

- A new HDR-image reconstruction algorithm for input sensor data where the per-pixel gain varies over the sensor [16]. This algorithm has been implemented and tested for commercial cameras from Canon.

- We have made publicly available a set of unique HDR-video sequences for research and educational purposes. These sequences can be downloaded from: www.hdrv.org

Tone mapping for HDR-video

An important topic in HDR imaging and video is how to map the dynamic range of the HDR image, and of the real world, to the usually much smaller dynamic range of the display device. While an HDR image captured in a high contrast real scene often exhibits a dynamic range on the order of 5 to 10 log10 units, a conventional display system is limited to a dynamic range on the order of 2 to 4 log10 units. The mapping of pixel values from an HDR image or video sequence to the display system is called tone mapping, and is carried out using a tone mapping operator (TMO). Most TMOs rely on models that simulate the human visual system. Over the last two decades, extensive research has led to the development of a large number of TMOs. However, only a handful of them handle the temporal domain, i.e. HDR-video content, in a consistent and robust way. This lack of HDR-video TMOs is likely due to the (very) limited availability of HDR-video footage. Until recently, artificial and static scenes have been the only material available. Recent developments in HDR video capture, however, open up new possibilities for advancing techniques in the area. Here, we are developing new methods for tone mapping of HDR-video with a focus on processing in the temporal domain.
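As a concrete example of what a (global) TMO does, the classic photographic operator of Reinhard et al. compresses scene luminance into the displayable range with a simple rational curve. It is included here only to illustrate the basic tone mapping operation, not as one of the operators developed in this project:

```python
import numpy as np

def reinhard_global(lum, key=0.18, eps=1e-6):
    """Reinhard et al.'s global photographic tone mapping operator.

    Scales scene luminance by the log-average luminance (the "key" of
    the scene) and compresses with L / (1 + L), mapping the unbounded
    HDR range [0, inf) into the displayable range [0, 1).
    """
    log_avg = np.exp(np.mean(np.log(lum + eps)))   # geometric mean luminance
    scaled = key * lum / log_avg                   # exposure normalization
    return scaled / (1.0 + scaled)                 # sigmoid-like compression
```

Applied frame by frame to video, the log-average (and thus the whole curve) changes with scene content, which is one source of the temporal artifacts discussed below.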

In this project, we have performed a systematic survey and evaluation of tone mapping operators for HDR-video, see papers [11, 9]. The goal of the study is to compare their performance in terms of visual quality and to systematically investigate which problems still need to be solved in order to develop the next generation of display algorithms for HDR-video.

Comparison of TMOs

From our investigation it is evident that existing tone mapping operators behave very differently given the same input data. The plot shows how the output for a single input HDR pixel (dashed line) varies over a 10-second sequence of 400 HDR input frames for five different tone mapping operators (solid lines). Such variations introduce artifacts such as flickering and unnatural temporal adaptation.



The video displays an example comparison of six different tone mapping algorithms.

The main result of the survey and evaluation of tone mapping operators described in [11, 9] is that we have identified a set of problems which need to be addressed in order to develop the next generation of tone mapping operators for HDR-video. Based on these observations, we have developed a new tone mapping operator [24], including a new technique for compressing the dynamic range in the input video sequence and a new fast filtering scheme specifically designed for tone mapping. Our HDR-video tone mapping operator controls the visibility of image noise, adapts to the display and viewing environment, minimizes contrast distortions, preserves or enhances image details, and can be run in real-time on high resolution video without any preprocessing. Our novel contributions are: a fast procedure for computing local display-adaptive tone-curves which minimize contrast distortions, a fast method for detail enhancement free from ringing artifacts, and a production-ready, integrated video tone mapping solution combining all of the above features. The new tone mapping operator is demonstrated in the video below.
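One common ingredient in temporally robust video tone mapping is to filter the per-frame tone curves over time so they cannot jump between frames. The sketch below uses a plain exponential moving average as a stand-in for the filtering scheme of [24]:

```python
import numpy as np

def smooth_tone_curves(curves, alpha=0.1):
    """Temporally filter per-frame tone curves to suppress flicker.

    `curves` is (n_frames, n_knots): one tone-curve per frame, e.g.
    derived from a per-frame luminance histogram. An exponential moving
    average blends each new curve with the filtered history, so a sudden
    change in scene content produces a gradual, flicker-free adaptation.
    """
    out = np.empty_like(curves, dtype=np.float64)
    out[0] = curves[0]
    for t in range(1, len(curves)):
        out[t] = alpha * curves[t] + (1.0 - alpha) * out[t - 1]
    return out
```

Choosing `alpha` trades responsiveness against stability: too small and the operator adapts sluggishly to scene changes; too large and the flickering seen in the comparison plot above returns.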



Another important result in this project is a new algorithm for linearization of non-linear image processing operations in the perceptual domain [18]. The background to the algorithm described in [18] is that many image and video processing operators are non-linear, i.e. the visual effect of the operation is non-linear with respect to the parameters controlling the process.


Linearization in the perceptual domain

The figure illustrates the effect of the linearization. The top row shows the output from a non-linear image processing operation. A parameter controlling the operation is varied in equal steps between each image with increasing effect. The bottom row shows how the same operation has been linearized. Linearizing the effect of a processing operator makes it possible to, for example, interpolate between parameter configurations by interpolating between already processed images, i.e. without re-computing the often computationally expensive processing operations.
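The idea behind the linearization can be sketched as follows: given a measured, monotonic curve describing the perceptual effect of the operator as a function of its parameter, the curve is inverted to find parameter values whose outputs are perceptually equispaced. This is a minimal numerical version using simple linear interpolation in place of the perceptual model of [18]:

```python
import numpy as np

def linearize_parameter(params, effect, n=5):
    """Resample an operator's parameter so its visual effect grows in
    equal steps.

    `effect[i]` is a (measured, monotonically increasing) perceptual
    magnitude of the operation at `params[i]`. Inverting this curve
    numerically yields n parameter values whose outputs are perceptually
    equispaced, matching the bottom row of the figure.
    """
    targets = np.linspace(effect[0], effect[-1], n)  # equal perceptual steps
    return np.interp(targets, effect, params)        # invert effect(param)
```

For example, if the measured effect grows quadratically with the parameter, the linearized parameter values follow a square-root spacing, concentrating samples where the operator changes fastest.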


Results in this area include:

- A comprehensive survey of existing tone mapping operators for HDR-video, i.e. operators that include a temporal model for processing of variations in the time domain [11].

- A user evaluation of existing tone mapping methods identifying strengths and weaknesses [9]. A detailed overview of this study and all HDR-video sequences used in the experiments are available at www.hdrv.org.

- A new real-time noise-aware tone mapping operator for HDR-video [24].

- A new algorithm for perceptual linearization of image and video processing operators [18].

Photo-realistic image synthesis

Computer graphics rendering


We have developed a pipeline for capture and semi-automatic reconstruction of real scenes based on HDR-video and laser scan point clouds as input. By re-projecting the calibrated HDR-video data onto the recovered geometry, it is possible to build a digital model of the real scene that describes both its geometry and radiometric properties. This makes it possible to place virtual objects, e.g. furniture, into the digital scene model and create ultra-realistic computer graphics images in which the virtual objects cannot be distinguished from real ones, see papers [1, 10, 13, 23]. We refer to this as the Virtual Photo Set. In this part of the project, we have also developed methods for photo-realistic image synthesis in real-time. These are based on pre-computing parts of the light transport in the scene and approximating it using basis representations that can be used to efficiently solve the computations. Here, we have investigated the use of spherical harmonics basis projections, see paper [6], and basis functions that are adaptively learned from the input data, see papers [3, 12]. We have also developed a real-time rendering framework based on ray-tracing and Monte Carlo integration techniques for efficient sampling during the image synthesis [19].


A real scene, a recovered model and a photo-realistic rendering


The image shows an example from our scene capture, processing and rendering pipeline. (Left) A real photo studio was captured using HDR-video sequences. (Middle) The geometry of the scene was recovered using structure from motion techniques and manually adjusted. The captured HDR-video sequences were then re-projected onto the recovered geometry. (Right) Virtual objects were placed into the recovered scene and rendered using global illumination techniques.
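The spherical harmonics precomputation mentioned above can be illustrated by projecting captured environment lighting onto the first two SH bands. This is the standard precomputed radiance transfer construction with fixed real SH basis functions, not the adaptively learned bases of [3, 12]:

```python
import numpy as np

def sh_project(env, dirs, solid_angles):
    """Project environment radiance onto the first two SH bands (4 coeffs).

    `env[i]` is the radiance arriving from unit direction `dirs[i]`
    (shape (n, 3)), with per-sample solid angle `solid_angles[i]`.
    Diffuse shading can then be evaluated per surface point as a small
    dot product with these coefficients instead of integrating over the
    full environment map.
    """
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    basis = np.stack([0.282095 * np.ones_like(x),  # Y_0^0 (constant)
                      0.488603 * y,                # Y_1^{-1}
                      0.488603 * z,                # Y_1^{0}
                      0.488603 * x], axis=1)       # Y_1^{1}
    # Monte Carlo / quadrature estimate of the projection integrals
    return (basis * (env * solid_angles)[:, None]).sum(axis=0)
```

A constant (white) environment projects almost entirely onto the constant band, while the band-1 coefficients encode the dominant direction of the incident light, which is why a handful of coefficients already capture soft, low-frequency lighting well.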

Surface materials and the way light scattering is modeled at surfaces are important aspects of robust and predictable photo-realistic image synthesis. Within this project, we use HDR-imaging techniques to measure surface color and reflectance properties [2]. We are also developing new models for accurate representation of Bidirectional Reflectance Distribution Functions (BRDFs), see [15, 4]. A BRDF is a 4D function describing how light incident from a certain direction at a surface point is scattered into all other directions.
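To make the notion of a BRDF concrete, the sketch below evaluates a classic analytic model (Lambertian diffuse plus Blinn-Phong specular) for given light and view directions. This is a textbook stand-in, not the measured or fitted models of [2, 15, 4]:

```python
import numpy as np

def blinn_phong_brdf(wi, wo, n, kd=0.5, ks=0.4, shininess=32.0):
    """A classic analytic BRDF: Lambert diffuse + Blinn-Phong specular.

    wi, wo, n are unit vectors for the incident light direction, the
    outgoing view direction, and the surface normal. Returns reflectance
    per steradian. The specular lobe peaks when the half-vector between
    wi and wo aligns with the normal (the mirror configuration).
    """
    h = wi + wo
    h = h / np.linalg.norm(h)            # half-vector between light and view
    diffuse = kd / np.pi                 # energy-normalized Lambert term
    specular = (ks * (shininess + 2.0) / (2.0 * np.pi)
                * max(np.dot(n, h), 0.0) ** shininess)
    return diffuse + specular
```

Evaluating such a function for all incident/outgoing direction pairs is what makes the BRDF four-dimensional, and fitting compact, accurate models to measured 4D reflectance data is the core difficulty addressed in [15, 4].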

Real-time HDR image processing

The raw data generated by a high-end HDR video system is often on the order of gigabytes per second. This puts high demands on reliable and fast algorithms for data processing. Real-time tone mapping and visualization of the data is another challenge that needs to be addressed. In the future, as imaging sensors evolve towards even higher spatial and temporal resolution, the demands will increase further. To process these large amounts of data in real-time, we have started to investigate methods that allow us to utilize modern GPUs for parallel computation. GPUs are increasingly common, and variants are already used in embedded camera systems such as high-end mobile phones for conventional video processing (often decompression).

Depends workflow manager

Within this project, we have developed a node-based workflow management system called Depends [17]. The workflow manager is a modular, node-based system which organizes the processing into a directed acyclic graph (DAG). Each node is a small software component that is responsible for e.g. controlling a capture device, data I/O, or a specific data processing or visualization task. The generality of the nodes and the system design enables the user to rapidly build a node network customized to the input data and the task at hand. The workflow manager is built as a lightweight layer that controls the overall data-flow, keeps track of assets in the system, and manages the execution of specific tasks. The system is designed so that a software component can be turned into a node in the workflow management system with little effort required from the developer. We use this workflow management system for a wide range of applications ranging from scene reconstruction (computer vision) to photo-realistic rendering and image processing.

Node based image processing framework

A key application in this context is fast and flexible processing (image reconstruction, tone mapping, filtering, etc.) of HDR-video data. The image illustrates the components and layout of a typical node network for the tone-mapping of an HDR-image or video sequence. We have made the workflow management system publicly available with open source code at: http://www.dependsworkflow.net
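The DAG-based execution at the heart of such a workflow manager can be sketched in miniature: nodes are functions, edges declare dependencies, and a topological schedule (Kahn's algorithm) runs each node once all of its inputs are ready. This illustrates the organizing principle only, not the Depends implementation:

```python
from collections import defaultdict

def run_dag(nodes, edges):
    """Execute a node graph in dependency order.

    `nodes` maps a name to a function taking its parents' outputs as
    positional arguments; `edges` is a list of (src, dst) pairs forming
    a DAG. Kahn's algorithm schedules each node only after all of its
    dependencies have produced results.
    """
    parents, indeg = defaultdict(list), {n: 0 for n in nodes}
    for src, dst in edges:
        parents[dst].append(src)       # remember inputs, in edge order
        indeg[dst] += 1
    ready = [n for n, d in indeg.items() if d == 0]   # source nodes
    results = {}
    while ready:
        n = ready.pop()
        results[n] = nodes[n](*[results[p] for p in parents[n]])
        for src, dst in edges:         # release nodes whose inputs are done
            if src == n:
                indeg[dst] -= 1
                if indeg[dst] == 0:
                    ready.append(dst)
    return results
```

A hypothetical tone mapping chain would then be expressed as e.g. a load node feeding a reconstruction node feeding a tone-map node, and swapping one node never requires touching the others.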


Results in this area include:

- A multi-purpose node-based workflow and processing management system [17].

- Applications built inside the workflow management system with functionality for GPU-based processing of HDR-images and HDR-video.

- Implementations of a large number of tone mapping operators, image reconstruction algorithms, and other filtering operations [11].

- Download the workflow management system at: http://www.dependsworkflow.net


Research platform

This project is carried out within the framework of the Computer Graphics and Image Processing group at the division for Media and Information Technology (MIT), Department of Science and Technology (ITN), Linköping University. The group hosts the MIT Visual Computing Laboratory, which supports this CENIIT project with the computational resources and infrastructure needed for experimental setups and the design and development of imaging prototypes. The lab forms a core facility and is integrated with Norrköping Visualization Centre C. The lab is not only an integral part of the research; it is also in itself a demonstrator for industry collaborations, visits and educational purposes.



[24] Gabriel Eilertsen, Rafał Mantiuk, and Jonas Unger. Real-time noise-aware tone mapping for HDR-video. Accepted for publication in ACM Transactions on Graphics special issue proceedings of SIGGRAPH Asia ’15, 2015.
[23] Jonas Unger, Andrew Gardner, Per Larsson, Francesco Banterle, Capturing Reality for Computer Graphics Applications, Accepted for publication in Proceedings of SIGGRAPH Asia ’15, 2015.
[22] Joel Kronander, Francesco Banterle, Andrew Gardner, Ehsan Miandji, and Jonas Unger. Photorealistic rendering of mixed reality scenes. Computer Graphics Forum special issue Proceedings of Eurographics '15, 34(2): 33-44, May 2015.
[21] Ehsan Miandji, Joel Kronander, and Jonas Unger. Compressive Image Reconstruction in Reduced Union of Subspaces. Computer Graphics Forum special issue Proceedings of Eurographics '15, 34(2): 33-44, May 2015.
[20] Andrew Jones, Jonas Unger, Jay Busch, Xueming Yu, Hsuan-Yueh Peng, Oleg Alexander, and Paul Debevec. Creating a life-sized automultiscopic Morgan Spurlock for CNN’s “Inside Man”. In ACM SIGGRAPH 2014 Talks, August 2014.

[19] Joel Kronander, Johan Dahlin, Daniel Jönsson, Manon Kok, Thomas B. Schön, and Jonas Unger. Real-time video based lighting using GPU ray-tracing. In Proceedings of EUSIPCO '14: Special Session on HDR-video, September 2014.


[18] Gabriel Eilertsen, Jonas Unger, Robert Wanat, and Rafał Mantiuk. Perceptually based parameter adjustments for video processing operations. In ACM SIGGRAPH 2014 Talks, August 2014.

[17] Andrew Gardner and Jonas Unger. Depends: Workflow management software
for visual effects production. In Proceedings of DigiPro 2014, August 2014
[16] Saghi Hajisharif, Jonas Unger, and Joel Kronander. HDR reconstruction for
alternating gain (ISO) sensor readout. In Proceedings of Eurographics Short
Papers, Strasbourg, France, 2014

[15] Apostolia Tsirikoglou, Simon Ekeberg, Johan Vikström, Joel Kronander, and Jonas Unger. S(wi)ss: A flexible and robust sub-surface scattering shader. In Proceedings of SIGRAD 2014, June 2014.


[14] J. Kronander, S. Gustavson, G. Bonnet, A. Ynnerman, J. Unger. A Unified Framework for Multi-Sensor HDR Video Reconstruction. Signal Processing: Image Communication, 29(2): 203-215, 2014.


[13] J. Unger, J. Kronander, P. Larsson, S. Gustavson, J. Löw, A. Ynnerman. Spatially Varying Image Based Lighting using HDR-Video. Computers & Graphics, Elsevier, 37(7), November 2013.


[12] E. Miandji, J. Kronander, and J. Unger. Learning based compression of surface light fields for real-time rendering of global illumination scenes. In Proceedings of ACM SIGGRAPH Asia '13, Hong Kong, November 2013.


[11] G. Eilertsen, R. Wanat, R. Mantiuk, J. Unger. Evaluation of Tone Mapping Operators for HDR-video. Computer Graphics Forum Special Issue Proceedings of Pacific Graphics '13, Blackwell. Presented at Pacific Graphics, Singapore, 7-9 October, 2013.


[10] J. Unger, J. Kronander, P. Larsson, S. Gustavson, A. Ynnerman. Temporally and Spatially Varying Image Based Lighting. In EUSIPCO '13, Marrakech, Morocco, 9-13 September, 2013.


[9] G. Eilertsen, J. Unger, R. Wanat, R. Mantiuk. Survey and Evaluation of Tone Mapping Operators for HDR-video. In SIGGRAPH '13 Talks, Anaheim, USA, 21-25 July, 2013.


[8] E. Miandji, J. Kronander, J. Unger. Learning Based Compression for Real-Time Rendering of Surface Light Fields. In SIGGRAPH '13 Posters, Anaheim, USA, 21-25 July, 2013.


[7] J. Kronander, S. Gustavson, G. Bonnet, J. Unger. Unified HDR Reconstruction from RAW CFA Data. In Proceedings of the International Conference on Computational Photography (ICCP) 2013, Harvard University, Cambridge, USA, April 2013.


[6] S. Hajisharif, J. Kronander, E. Miandji, J. Unger. Real-time image based lighting with streaming HDR light probe sequences. In Proceedings of SIGRAD 2012, Växjö, Sweden, November 2012.


[5] J. Kronander, S. Gustavson, J. Unger. Real-time HDR video reconstruction for multi-sensor systems. In ACM SIGGRAPH 2012 Posters, page 65:1, Los Angeles, USA, August 2012.


[4] J. Löw, J. Kronander, A. Ynnerman, J. Unger. BRDF models for accurate and efficient rendering of glossy surfaces. ACM Transactions on Graphics (TOG), 31(1): 9:1-9:14, January 2012.


[3] E. Miandji, J. Kronander, and J. Unger. Geometry independent surface light fields for real time rendering of precomputed global illumination. In Proceedings of SIGRAD 2011, November 2011.


[2] G. Eilertsen, P. Larsson, and J. Unger. A versatile material reflectance measurement system for use in production. In Proceedings of SIGRAD 2011, November 2011.


[1] J. Unger, S. Gustavson, J. Kronander, P. Larsson, G. Bonnet, and G. Kaiser. Next generation image based lighting using HDR video. In ACM SIGGRAPH 2011 Talks (SIGGRAPH '11), ACM, New York, NY, USA, Article 60, August 2011.


Page manager: gunnar.host@liu.se
Last updated: Fri Sep 18 11:01:16 CEST 2015