By Steve Shaw, Light Illusion
As an industry veteran with experience in all aspects of digital image technology, including hands-on creative post-production operation, production supervision and support, and equipment development and manufacture, I have been involved with stereoscopic 3D for some time now, working on a wide range of projects around the globe.
In this seven-part series, I will take you through the basics of shooting stereoscopic 3D, showing the various methods employed, and the different effects they have on the final stereoscopic 3D image.
The basic requirement for stereoscopic 3D images is the use of two cameras to capture left- and right-eye images. These are positioned to mimic human stereo vision, each camera seeing a subtly different angle of the same scene. Our brain fuses the two images into a sense of depth. If you close one eye, you are not seeing in 3D. You might think you are, because your brain knows certain things about the scene in front of you, but you are not seeing 3D.
The positioning of the cameras in relation to each other – whether they point towards or away from each other, their distance apart, and any differences in height, vertical angle, lens, focus, calibration, etc. – will affect the resultant image, and if not done correctly the resulting stereo 3D images will be very poor and undesirable.
Historically, one of the issues with stereoscopic 3D capture has been that subtle but unwanted differences between the two images can cause viewer problems, as the eyes see one thing while the brain expects another.
This is one of the biggest issues with 35mm film capture for stereoscopic 3D, as the differences between the two images – left and right eye – can simply be too variable: differences in film movement through the camera mechanism, differences in film processing, differences in grain patterns, not to mention stability and variation issues with film projection.
There have been many attempts to overcome these issues, with developments such as dual-lens 70mm cameras shooting two 35mm frames simultaneously, overcoming the major issues of dual 35mm camera shooting. This method, however, adds considerable expense, as it involves specialised shooting systems.
Digital cinematography, combined with DI and digital projection, can far more easily overcome these issues with a number of additional benefits.
The Stereo Effect
Matching human stereo vision requires two cameras with their optical centres spaced roughly a couple of inches (approximately 2.5″) apart – just like human eyes.
However, much like the way that colour and contrast in a film image (digital or celluloid) is exaggerated to enhance the theatrical experience, this interocular distance, or more correctly, interaxial separation, is often increased to exaggerate the stereo effect, and add impact to the viewed image.
The problem, though, is that it is not that simple in practice: the impact of a stereoscopic image is affected by more than the simple quantity of the stereo effect.
There are lots of ‘rules’ that often get quoted with regard to maximum degrees of convergence and parallax separation.
The rules are there to attempt to stop your eyes having to diverge to look at objects behind the screen. For example, the rules state that diverging past about 1.5 degrees is a bad thing, as your eyes will be trying to move outwards, and it hurts!
However, these rules are not going to be discussed in-depth here, as rules are made to be broken, and many of the images shown here are deliberately not within the rules. The main thing is that it is very easy to see the images being shot, and tell what works and what doesn’t – this is, after all, an image-based industry!
What you do need to know, however, is the following:
Positive parallax = behind the screen – the left eye image is to the left of the right eye image.
Negative parallax = in front of the screen – the left eye image is to the right of the right eye image.
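As an illustrative sketch (the function name and pixel convention are my own, not from this series), that sign convention can be expressed directly in code:

```python
def classify_parallax(left_x, right_x):
    """Classify apparent depth from the horizontal pixel positions of the
    same feature in the left-eye and right-eye images."""
    parallax = right_x - left_x  # positive when the right-eye image is further right
    if parallax > 0:
        return "positive parallax: behind the screen"
    if parallax < 0:
        return "negative parallax: in front of the screen"
    return "zero parallax: at the screen plane"
```

A feature at pixel 100 in the left eye and 120 in the right eye therefore sits behind the screen; swap the two positions and it jumps out in front.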
The rules associated with stereo images can be complex, and often unnecessary, but some can be important, such as those for screen sizes.
The first thing to understand is the term ‘stereo budget’. This is used to define the total parallax value within a given scene, from maximum negative (in front of the screen plane) to maximum positive (behind the screen plane).
If the total depth budget is too great, the scene will be difficult to view. In addition, different size screens have different budget allowances.
Maximum positive parallax should, ideally, be equal to the standard human eye pupil separation. In this way the eyes are not forced into divergence, which is what will happen if the positive parallax is greater than the human eye pupil separation. At positive parallax equal to the human eye pupil, the object being viewed will appear to be at infinity.
Maximum negative parallax is slightly different, but should ideally be equal in magnitude to the maximum positive parallax, which will place the object being viewed halfway between the viewer and the screen.
Negative parallax is actually more flexible, as the viewer can cope with larger changes in negative parallax than in positive. It is, after all, much easier to go cross-eyed than the opposite. In reality it is possible to go to 2 or even 3 (or more!) times the suggested value if the story warrants it, and if all other limitations are taken into account.
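Those two limits can be checked with a simple similar-triangles model of perceived depth – a simplification I am assuming here, not a formula from this series. With eye separation e, viewing distance V, and on-screen parallax p (same units as e), the object appears at distance D = V × e / (e − p):

```python
def perceived_distance(parallax, eye_sep=2.5, view_dist=120.0):
    """Perceived distance from the viewer (same units as view_dist) for a
    given on-screen parallax (same units as eye_sep, here inches).
    Derived from similar triangles between the two eyes and the two screen
    points -- a simplified geometric model, not an industry formula."""
    denom = eye_sep - parallax
    if denom <= 0:
        # parallax at or beyond the eye separation forces the eyes to diverge
        return float("inf")
    return view_dist * eye_sep / denom
```

With the defaults above, a parallax equal to the eye separation (2.5″) gives infinity – the object at infinity behind the screen – while a parallax of −2.5″ gives half the viewing distance, i.e. halfway between viewer and screen, matching the two limits described.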
Measurements of parallax are relative to the screen size and viewing distance, which means that stereo images that work well on a small screen, may not work well when viewed on a large screen.
As a rule of thumb, the following calculation shows the maximum positive parallax for a given screen size:
Human interocular separation (~2.5″) divided by the screen width (in inches), with the result multiplied by the number of horizontal pixels.
So, for a small 24″-wide HD monitor the maximum positive parallax would be:
(2.5 / 24) × 1920 = 200 pixels
But on a 30ft-wide (360″) screen also showing HD, the calculation would be:
(2.5 / 360) × 1920 ≈ 13.3 pixels
The allowed number of pixels is smaller because each pixel on the big screen is physically much larger than on the small monitor, while human eye pupil separation stays the same. This means that the final display plays a big part in what will work and what will not. One size does not fit all.
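The rule of thumb above can be sketched as a small helper – the function name and default values are illustrative assumptions, not part of the series:

```python
def max_positive_parallax_px(screen_width_in, h_pixels=1920, interocular_in=2.5):
    """Rule-of-thumb maximum positive parallax in pixels:
    (interocular / screen width) * horizontal pixel count."""
    return interocular_in / screen_width_in * h_pixels

print(max_positive_parallax_px(24))       # 24-inch HD monitor: about 200 pixels
print(max_positive_parallax_px(30 * 12))  # 30ft (360-inch) HD screen: about 13.3 pixels
```

The same HD image that tolerates 200 pixels of parallax on a desktop monitor is restricted to roughly 13 pixels on a theatrical screen, which is why stereo material must be checked on (or adjusted for) its target display size.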
In Part 2 of this series, I will cover the two different approaches to convergence.
Steve Shaw is a Partner in Light Illusion, a top consulting service for the digital film market, with offices in the UK and India.