
Evaluation of tone mapping operators for HDR-video

Contact: Jonas Unger

This is the project web page for the Pacific Graphics 2013 and Computer Graphics Forum paper:

Gabriel Eilertsen, Robert Wanat, Rafal Mantiuk, Jonas Unger:
Evaluation of tone mapping operators for HDR-video, In Computer
Graphics Forum, Special Issue Proceedings of Pacific Graphics, Singapore,
7-9 October, 2013.

Here you will find an overview of the project and examples of tone mapped HDR-videos, and you can download all HDR-video sequences and all tone mapped sequences used in the experiments. Link to the paper: paper preprint (.pdf)

Please note that you can find additional HDR-video sequences at our main resource page: http://www.hdrv.org

 


Abstract

Eleven tone-mapping operators intended for video processing are analyzed and evaluated with camera-captured and computer-generated high-dynamic-range content. After optimizing the parameters of the operators in a formal experiment, we inspect and rate the artifacts (flickering, ghosting, temporal color consistency) and color rendition problems (brightness, contrast and color saturation) they produce. This allows us to identify major problems and challenges that video tone-mapping needs to address. Then, we compare the tone-mapping results in a pair-wise comparison experiment to identify the operators that, on average, can be expected to perform better than the others and to assess the magnitude of differences between the best performing operators.

 

TMO output in the temporal domain

The figure illustrates how a set of different tone mapping operators behave in the temporal domain at a single pixel in the input sequence. The input sequence consists of 100 HDR frames. The variation of the HDR input pixel under consideration is visualized by the dashed red line. Although the input varies relatively slowly over time, the outputs from the tone mapping operators differ significantly.
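As a rough illustration of the kind of analysis behind such a figure, the sketch below (not code from the paper) tracks the luminance of a single pixel through a list of HDR frames and compares a fixed global scaling with a scaling that is re-estimated every frame; the per-frame variant stands in for operators whose parameters change over time and can therefore flicker. The frame list, pixel coordinates and function names are assumptions made for the example.

import numpy as np
import matplotlib.pyplot as plt

def luminance(rgb):
    # Rec. 709 luminance approximation of a linear-RGB frame (HxWx3 numpy array).
    return 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

def track_pixel(frames, y, x):
    # frames: list of HDR frames as float numpy arrays (hypothetical input).
    hdr = np.array([luminance(f)[y, x] for f in frames])
    seq_max = max(luminance(f).max() for f in frames)
    fixed_scaling = hdr / seq_max                                # same scaling for all frames
    per_frame = np.array([luminance(f)[y, x] / luminance(f).max()
                          for f in frames])                      # re-normalized every frame
    return hdr, fixed_scaling, per_frame

# Example usage (100 frames, pixel at row 360, column 640 -- both hypothetical):
# hdr, fixed_scaling, per_frame = track_pixel(frames, 360, 640)
# plt.plot(hdr / hdr.max(), 'r--', label='HDR input (scaled)')
# plt.plot(fixed_scaling, label='fixed scaling')
# plt.plot(per_frame, label='per-frame normalization')
# plt.xlabel('frame'); plt.ylabel('tone mapped value'); plt.legend(); plt.show()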

Overview

One of the main problems in high dynamic range (HDR) imaging and video is mapping the dynamic range of the HDR image to the usually much lower dynamic range of the display device. While an HDR image captured in a real-life scene often exhibits a dynamic range in the order of 5 to 10 log10 units, most conventional display systems are limited to a dynamic range in the order of 2 to 4 log10 units. Most display systems are also limited to quantized 8-bit input. The mapping of pixel values from an HDR image or video sequence to the display system is called tone mapping, and is carried out using a tone mapping operator (TMO).
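As a minimal, hedged example of what such a mapping involves (this is not one of the operators evaluated in the paper), the sketch below compresses HDR luminance with the well-known global curve L_d = L / (1 + L), applies gamma encoding, and quantizes the result to the 8-bit range expected by a conventional display. The input is assumed to be a linear-RGB floating point frame.

import numpy as np

def simple_global_tmo(hdr_rgb, key=0.18, gamma=2.2):
    # Luminance of the linear-RGB HDR frame (Rec. 709 weights).
    lum = 0.2126 * hdr_rgb[..., 0] + 0.7152 * hdr_rgb[..., 1] + 0.0722 * hdr_rgb[..., 2]
    lum = np.maximum(lum, 1e-8)
    # Expose the frame so that its log-average luminance maps to the chosen key value.
    log_avg = np.exp(np.mean(np.log(lum)))
    scaled = key * lum / log_avg
    # Global compression curve L_d = L / (1 + L), applied via a per-pixel luminance ratio.
    mapped = scaled / (1.0 + scaled)
    ratio = (mapped / lum)[..., np.newaxis]
    ldr = np.clip(hdr_rgb * ratio, 0.0, 1.0) ** (1.0 / gamma)
    # Quantize to the 8-bit input expected by most display systems.
    return (ldr * 255.0 + 0.5).astype(np.uint8)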

Over the last two decades an extensive body of research has focused on the problem of tone mapping. A number of approaches have been proposed, with goals ranging from producing the most faithful to the most artistic representation of real-world intensity ranges and colors on display systems with limited dynamic range. In spite of that, only a handful of the presented operators can process video sequences. This lack of HDR-video TMOs can be attributed to the (very) limited availability of high quality HDR-video footage. However, recent developments in HDR-video capture open up possibilities for advancing techniques in the area.

Extending tone mapping from static HDR images to video sequences presents new challenges, as it is necessary to take both the spatial and the temporal domain into account. In this project, we set out to identify the problems that need to be solved in order to enable the development of next generation TMOs capable of robust processing of HDR-video.

The main contribution of the paper is a systematic evaluation of TMOs designed for HDR-video. The evaluation consists of three parts: a survey of the field to identify and classify TMOs for HDR-video, a qualitative experiment identifying strengths and weaknesses of individual TMOs, and a pair-wise comparison experiment ranking which TMOs are preferred for a set of HDR-video sequences. Based on the results from the experiments, we identify a set of key aspects, or areas, in the processing of the temporal and spatial domains that hold research problems which still need to be solved in order to develop TMOs for robust and accurate rendition of HDR-video footage captured in general scenes.
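For readers unfamiliar with how pair-wise comparison data can be turned into a ranking, the sketch below fits a basic Bradley-Terry model to a matrix of preference counts. This is only an illustration of the general idea, not the scaling procedure used in the paper, and the example data are hypothetical.

import numpy as np

def bradley_terry(wins, iters=200):
    # wins[i, j] = number of times operator i was preferred over operator j.
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            total_wins = wins[i].sum()
            denom = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            if denom > 0:
                p[i] = total_wins / denom
        p /= p.sum()
    return p  # larger value = more often preferred

# Example with three hypothetical operators:
# wins = np.array([[0, 7, 9], [3, 0, 6], [1, 4, 0]])
# print(bradley_terry(wins))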

For more details, we refer to the paper: paper preprint (.pdf)

 

HDR-video sequences and tone mapping examples

In our evaluation, we included 11 TMOs (see the full list of TMOs, or find more details in the paper: paper preprint (.pdf)).

Below you can download all HDR-videos used in the evaluation and view side-by-side comparisons of how all 11 TMOs perform for each sequence. In order to manage storage and downloads, the HDR-videos have been down-sampled from the full HD resolution used in the experiments to 720p, i.e. 1280x720 pixels, and stored as OpenEXR frames. All HDR-sequences and other information found below may be used freely under the terms of the Creative Commons license CC BY-NC 3.0.
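For convenience, a small sketch for loading one of the OpenEXR frames in Python with OpenCV and inspecting its dynamic range is given below. The file name is hypothetical, and recent OpenCV builds require the OPENCV_IO_ENABLE_OPENEXR environment variable to be set before EXR decoding is enabled.

import os
os.environ.setdefault("OPENCV_IO_ENABLE_OPENEXR", "1")  # must be set before EXR decoding is used
import cv2
import numpy as np

# Hypothetical file name -- substitute one of the downloaded 720p frames.
frame = cv2.imread("hallway_000001.exr", cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)
# OpenCV loads images in BGR channel order; compute Rec. 709 luminance accordingly.
lum = 0.2126 * frame[..., 2] + 0.7152 * frame[..., 1] + 0.0722 * frame[..., 0]
lum = lum[lum > 0]
print("resolution: %d x %d" % (frame.shape[1], frame.shape[0]))
print("dynamic range: %.1f log10 units" % (np.log10(lum.max()) - np.log10(lum.min())))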

Please note that you can find additional HDR-video sequences at our main resource page: http://www.hdrv.org

 


HDR-video sequence 1: Campus Norrköping Window

Campus window HDR-video sequence

Sequence information:

This HDR-video sequence was captured at Campus Norrköping in Sweden. The sequence was radiometrically calibrated using a Photo Research PR-650 photospectrometer. The sequence was captured using the imaging setup and algorithms described in:

J. Kronander, S. Gustavson, G. Bonnet, J. Unger: Unified HDR Reconstruction from RAW CFA Data, In Proceedings of the International Conference on Computational Photography (ICCP), Harvard University, Cambridge, USA, April 2013.

J. Kronander, S. Gustavson, G. Bonnet, A. Ynnerman, J. Unger: A Unified Framework for Multi-Sensor HDR Video Reconstruction, Accepted for publication in Signal Processing: Image Communication, 2013.
Download HDR-video sequence as OpenEXR frames (720p): download here.
Campus Norrköping Window - Tone mapped examples (best viewed in full screen)

HDR-video sequence 2: Campus Norrköping Hallway

Hallway HDR-video sequence
Sequence information:

This HDR-video sequence was captured at Campus Norrköping in Sweden. The sequence was radiometrically calibrated using a Photo Research PR-650 photospectrometer. The sequence was captured using the imaging setup and algorithms described in:

J. Kronander, S. Gustavson, G. Bonnet, J. Unger: Unified HDR Reconstruction from RAW CFA Data, In Proceedings of the International Conference on Computational Photography (ICCP), Harvard University, Cambridge, USA, April 2013.

J. Kronander, S. Gustavson, G. Bonnet, A. Ynnerman, J. Unger: A Unified Framework for Multi-Sensor HDR Video Reconstruction, Accepted for publication in Signal Processing: Image Communication, 2013.
Download HDR-video sequence as OpenEXR frames (720p): download here.

Campus Norrköping Hallway - Tone mapped examples (best viewed in full screen)


HDR-video sequence 3: Campus Norrköping Hallway 2

Hallway2 HDR-video sequence
Sequence information:

This HDR-video sequence was captured at Campus Norrköping in Sweden. The sequence was radiometrically calibrated using a Photo Research PR-650 photospectrometer. The sequence was captured using the imaging setup and algorithms described in:

J. Kronander, S. Gustavson, G. Bonnet, J. Unger: Unified HDR Reconstruction from RAW CFA Data, In Proceedings of the International Conference on Computational Photography (ICCP), Harvard University, Cambridge, USA, April 2013.

J. Kronander, S. Gustavson, G. Bonnet, A. Ynnerman, J. Unger: A Unified Framework for Multi-Sensor HDR Video Reconstruction, Accepted for publication in Signal Processing: Image Communication, 2013.
Download HDR-video sequence as OpenEXR frames (720p): download here.

Campus Norrköping Hallway2 - Tone mapped examples (best viewed in full screen)


HDR-video sequence 4: Students

Students HDR-video sequence
Sequence information:

This HDR-video sequence was captured at Campus Norrköping in Sweden. The sequence was radiometrically calibrated using a Photo Research PR-650 photospectrometer. The sequence was captured using the imaging setup and algorithms described in:

J. Kronander, S. Gustavson, G. Bonnet, J. Unger: Unified HDR Reconstruction from RAW CFA Data, In Proceedings of the International Conference on Computational Photography (ICCP), Harvard University, Cambridge, USA, April 2013.

J. Kronander, S. Gustavson, G. Bonnet, A. Ynnerman, J. Unger: A Unified Framework for Multi-Sensor HDR Video Reconstruction, Accepted for publication in Signal Processing: Image Communication, 2013.
Download HDR-video sequence as OpenEXR frames (720p): download here.

Students - Tone mapped examples (best viewed in full screen)


HDR-video sequence 5: Driving at night

Driving at night rendered HDR-video sequence
Sequence information:

This sequence was rendered by Josselin Petit as part of the project "Assessment of video tone-mapping: Are cameras' S-shaped tone-curves good enough?". The driving simulator software is courtesy of LEPSIS (part of IFSTTAR). The light source intensities and materials were modelled to closely resemble the radiometric characteristics of a night scene.

The sequence is available under the Creative Commons Attribution license (CC BY). If you wish to use this video sequence in your research, please cite the paper:

Petit, J., & Mantiuk, R. K. (2013). Assessment of video tone-mapping : Are cameras ’ S-shaped tone-curves good enough? Journal of Visual Communication and Image Representation, 24, 1020–1030. doi:10.1016/j.jvcir.2013.06.014

To attribute this work in other material, please use the text:

"Driving at Night (C) 2012 Josselin Petit, rendered for the purpose of the "Assessment of video tone-mapping" project: http://pages.bangor.ac.uk/~eesa0c/projects/video_tmo_cmp/."

Download HDR-video sequence as OpenEXR frames (720p): driving_simulator.zip (442 MB)

Driving at night - Tone mapped examples (best viewed in full screen)


HDR-video sequence 6: C Exhibition area

Exhibition area HDR-video sequence
Sequence information:

This sequence was captured using a RED EPIC camera set to HDR-X mode and was radiometrically calibrated using a Photo Research PR-650 photospectrometer.

Download HDR-video sequence as OpenEXR frames (720p): download here.

C Exhibition area - Tone mapped examples (best viewed in full screen)


Tone mapped sequences

In our evaluation we included 11 TMOs (see the full list of TMOs, or find more details in the paper: paper preprint (.pdf)). Below you can download all tone mapped sequences used in the evaluation. Each .zip file below contains all of the above HDR-sequences tone mapped with the TMO indicated by the file name:

Visual adaptation TMO (Ferwerd96) (230 MB)

Time adaptation TMO (Pattanaik00) (213 MB)

Local adaptation TMO (Ledda04) (270 MB)

 Mal-adaptation TMO (Irawan05) (212 MB)

Virtual exposures TMO (Bennett05) (181 MB)

Cone model TMO (Hateren06) (166 MB)

Display adaptive TMO (Mantiuk08) (242 MB)

Retina model TMO (Benoit09) (213 MB)

Color appearance TMO (Reinhard12) (136 MB)

Temporal coherence TMO (Boitard12) (150 MB)

Camera TMO (234 MB)

 

List of tone mapping operators included in the evaluation

For more details and references, we refer to the paper: paper preprint (.pdf)

The TMOs included in the evaluation.

Acknowledgements

This project was funded by the Swedish Foundation for Strategic Research (SSF) through grant IIS11-0081, the Swedish Research Council through the Linnaeus Environment CADICS, Linköping University Center for Industrial Information Technology (CENIIT), and COST Action IC1005 on HDR video.

The code of the Mal-adaptation TMO is courtesy of Josselin Petit. The Display Adaptive TMO is part of the pfstools software [http://pfstools.sourceforge.net/]. The Night Driving sequence was generated using the driving simulator software, courtesy of LEPSIS (part of IFSTTAR).

 

