By Alan Chalmers, goHDR and WMG, University of Warwick
There is a wide range of colours and lighting intensities in the real world. The human eye is capable of seeing detail from moonlight to bright sunshine. As our eyes are constantly moving and adapting, in any scene we have a dynamic contrast ratio of nearly 1,000,000:1 (which equates to about 20 f-stops). Traditional imaging techniques, on the other hand, are incapable of accurately capturing or displaying such a range of lighting.
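The f-stop figure follows directly from the contrast ratio, since each f-stop represents a doubling of light. A quick check in Python:

```python
import math

# Each f-stop doubles the amount of light, so the number of f-stops
# spanned by a contrast ratio is simply log2 of that ratio.
ratio = 1_000_000
stops = math.log2(ratio)
print(f"{ratio:,}:1 contrast ratio ~ {stops:.1f} f-stops")  # ~ 19.9
```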
The areas of a scene that fall outside the limited range, or Low Dynamic Range (LDR), of traditional cameras and displays are either under- or over-exposed. High Dynamic Range (HDR) imaging technologies overcome this limitation. HDR can capture and deliver a far wider range of real-world lighting to provide a significantly enhanced viewing experience, for example the ability to clearly see a racing car as it enters and leaves a tunnel.
Because HDR images capture and display the wide range of light intensity levels found in real scenes more accurately, HDR can also provide a level of depth perception that is not available with LDR technologies. In particular, HDR provides two important monocular cues for depth perception: aerial perspective and light-object interaction. These monocular cues naturally complement the binocular cues of 3D-HDR.
How HDR Works
HDR techniques merge multiple single-exposure LDR images, each captured at a different exposure, to create a picture that corresponds to our own vision and thus meets our innate expectations. Although HDR for still photography has been around for several decades, HDR video cameras are only just emerging. The University of Warwick and SpheronVR were instrumental in developing the world’s first HDR video camera capable of capturing 20 f-stops in full high-definition resolution (1920×1080 pixels) at 30 frames per second [CBB*09].
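The merging step can be sketched in a few lines. This is a minimal illustration of the general multi-exposure approach, not the pipeline of the camera described above: it assumes a linear sensor response and uses a simple "hat" weighting that trusts mid-range pixel values over near-black or near-white ones (the weighting scheme and thresholds here are illustrative choices).

```python
def hat_weight(v, v_min=0.05, v_max=0.95):
    """Weight pixel values near the middle of the usable range most heavily;
    values close to the clipping points carry almost no information."""
    if v <= v_min or v >= v_max:
        return 1e-6  # near-zero trust in clipped values
    return 1.0 - abs(2.0 * v - 1.0)

def merge_exposures(pixels, exposure_times):
    """pixels: the same scene point's value in [0, 1] at each exposure.
    Dividing by exposure time recovers a relative radiance estimate;
    the weighted average combines the estimates into one HDR value."""
    num = den = 0.0
    for v, t in zip(pixels, exposure_times):
        w = hat_weight(v)
        num += w * (v / t)
        den += w
    return num / den

# One scene point seen at three hypothetical exposure times: the shortest
# exposure alone captures the highlight, the longest alone the shadow.
radiance = merge_exposures([0.02, 0.25, 0.98], [1/30, 1/250, 1/2000])
```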
Where the dynamic range of the display matches that of the captured lighting, the images can be displayed in a straightforward manner [SHS*04]. Where it does not, the images must be mapped down to the dynamic range of the available display, a process known as tone-mapping, in a way that preserves the perceptual appearance of the real scene [BAD*11].
Our world is three-dimensional, and humans perceive depth through a variety of monocular and binocular cues. Among the monocular cues, aerial (or atmospheric) perspective is caused by the scattering of light by the atmosphere: objects further from the viewer appear to have lower contrast with their background and less saturated colours. The way light interacts with objects, including the objects' reflective properties and the shadows they cast, is another important monocular cue the brain uses to determine an object's shape and its position within the scene. By delivering more accurate lighting, HDR images can provide these monocular cues.
Binocular cues provide depth perception when a real scene is viewed with both eyes. Stereoscopy, or 3D, is the imaging technique that creates or improves the illusion of depth in a 2D image by simulating the use of two eyes in a real scene. This is achieved by presenting a separate, offset image to each of the viewer’s eyes.
3D-HDR, also known as Stereoscopic High Dynamic Range (SHDR), has the potential to bring the HDR and 3D technologies together, exploiting the advantages of both. This novel imaging method offers an unprecedented level of realism, delivering both improved depth perception and a realistic representation of the scene lighting. 3D-HDR has a further advantage when glasses are used to deliver the 3D content: HDR images are much brighter than LDR images, so the loss of contrast caused by the glasses is less noticeable.
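The data volumes quoted in the next paragraph for uncompressed HDR video follow from simple arithmetic, and can be checked directly (a sketch assuming 3 colour channels stored as 32-bit, i.e. 4-byte, values, with sizes reported in binary megabytes/gigabytes):

```python
# Raw data rate of uncompressed HD HDR video.
width, height, channels, bytes_per_channel = 1920, 1080, 3, 4
fps, seconds = 30, 60

frame_bytes = width * height * channels * bytes_per_channel
per_second = frame_bytes * fps
per_minute = per_second * seconds
compressed_minute = per_minute / 150  # at a 150:1 compression ratio

print(f"frame:  {frame_bytes / 2**20:.1f} MB")       # ~23.7 MB
print(f"second: {per_second / 2**20:.0f} MB")        # ~712 MB (about a CD)
print(f"minute: {per_minute / 2**30:.1f} GB")        # ~41.7 GB
print(f"minute at 150:1: {compressed_minute / 2**20:.0f} MB")  # ~285 MB
```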
The major challenge with HDR video is the huge amount of data that is generated. Using 32 bits to represent each colour channel, a single uncompressed HDR frame at HD resolution requires 24MB. Capturing at 30 frames per second generates approximately 42GB for a minute of footage, equivalent to a CD's worth of data every second. 3D-HDR only makes the problem worse. The key to the success of HDR (and thus 3D-HDR) is compression. At goHDR, we have developed a novel compression algorithm, based on in-depth knowledge of HDR, that achieves compression ratios of at least 150:1 with less than 2% perceptual loss compared to the real scene. This enables all lighting information to be fully preserved and passed from capture to display on existing ICT (Information and Communication Technology) infrastructure.
Applications
3D-HDR video enables previously unattainable situations to be faithfully recorded and displayed. Beyond the obvious benefits to the film and television industries, such as the ability to clearly see the soccer ball as it is kicked from the sunshine into the shadow of the stadium, or an advertising board that lies in shadow during a tennis match, there are a number of niche applications as well. These include the filming of surgical operations, with their range of lighting from dark, deep body cavities to the reflections of the bright operating theatre lights on metal medical instruments, and security applications, especially in extreme lighting conditions.
3D-HDR provides the authentic lighting of HDR with the enhanced depth perception of 3D. Compression algorithms, such as that developed by goHDR, allow the huge data requirements of HDR video to be handled on existing infrastructure now.
There is no doubt that HDR is coming. For example, major players such as Sony have already announced that HDR will form part of their next generation of TVs [Sony-TV11], and their next generation of camera sensors have HDR video capability [Sony-HDRvideo12].
HDR complements existing 3D technologies and removes the problem of under- or over-exposed pixels. The S3D industry could benefit significantly from embracing HDR as an integral part of future 3D developments.
[CBB*09] Chalmers A.G., Bonnet G., Banterle F., Dubla P., Debattista K., Artusi A., Moir C., “A High Dynamic Range Video Solution”, SIGGRAPH Asia 2009 Emerging Technologies, Yokohama, December 2009.
[SHS*04] Seetzen H., Heidrich W., Stuerzlinger W., Ward G., Whitehead L., Trentacoste M., Ghosh A., Vorozcovs A., “High Dynamic Range Display Systems”, SIGGRAPH 2004, 2004.
[BAD*11] Banterle F., Artusi A., Debattista K., Chalmers A.G., Advanced High Dynamic Range Imaging, A K Peters, March 2011.
Alan Chalmers is a Professor at WMG, University of Warwick and a founder of goHDR Ltd, a software business with the aim of being the leader in software that enables HDR video.