The Basics of Shooting Stereoscopic 3D – Part 2: Convergence
By Steve Shaw, Light Illusion
Last month, in The Basics of Shooting Stereoscopic 3D – Part 1, we looked at the basic theory of stereoscopic 3D, at screen rules and some of the terminology. We covered the differences between analogue and digital recording, factors that can affect the stereo image, positive and negative parallax, and the stereo budget. This month I will discuss convergence. Convergence is an interesting subject, as it has two very different approaches, with different reasons for choosing one rather than the other.
Convergence
The convergence point – the point at which the left and right eye images align – sits at the screen plane when the image is projected, with objects in front of this point appearing to be in front of the screen, and those behind it appearing to be beyond the screen.

To see the 3D stereoscopic images in this article you will need a set of red/cyan glasses (red for the left eye, cyan for the right), which can easily be ordered online from www.3DglassesShop.com. If you don't have access to such glasses, you can still spot the point of convergence, as it is simply where the two images align, but it is a lot harder to decipher the entire image.
When the two camera images of a subject are superimposed on top of each other and are aligned, the subject in question has zero parallax, and will appear to be at the same distance from the viewer as the screen onto which it is being projected.
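For readers who like to see the geometry behind this, the relationship between on-screen parallax and perceived depth can be sketched in a few lines of code. The example below is purely illustrative: it assumes an idealised viewer with a 65 mm eye separation sitting square to the screen, and the figures are my own.

    # Illustrative sketch only: where a point appears relative to the screen,
    # given the on-screen separation (parallax) of its left and right images.
    # Assumes an idealised, centred viewer; not a production calculation.
    EYE_SEPARATION_MM = 65.0   # assumed average adult eye separation

    def perceived_depth_mm(parallax_mm, viewing_distance_mm, eye_separation_mm=EYE_SEPARATION_MM):
        """Distance from the viewer to the perceived point.
        parallax > 0 (positive/uncrossed): point appears behind the screen.
        parallax = 0: point appears at the screen plane.
        parallax < 0 (negative/crossed): point appears in front of the screen."""
        if parallax_mm >= eye_separation_mm:
            return float("inf")   # parallax equal to the eye separation pushes the point to infinity
        return viewing_distance_mm * eye_separation_mm / (eye_separation_mm - parallax_mm)

    # Example: viewer sitting 3 m from the screen
    print(perceived_depth_mm(0.0, 3000))     # 3000.0 mm: at the screen plane (zero parallax)
    print(perceived_depth_mm(30.0, 3000))    # ~5571 mm: behind the screen
    print(perceived_depth_mm(-30.0, 3000))   # ~2052 mm: in front of the screen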

This first image shows convergence on the camera rig, with virtually all the other objects appearing to be behind the screen plane. (Image courtesy of Pietro Carlomagno www.carlomagno3d.it taken from the set of Inferno)
To better see the point of convergence, place your cursor on the image: the cursor always sits at the screen plane, so you can quickly see which objects are in front of, and which behind, the screen plane.

This image has the convergence point set on the crew pushing the camera dolly, with the camera, dolly and camera operator in front of the screen plane. (Image courtesy of Pietro Carlomagno www.carlomagno3d.it taken from the set of Inferno)

This image has the point of convergence set at the rear wall, with all other objects within the image appearing in front of the screen plane. (Image courtesy of Pietro Carlomagno taken from the set of Inferno)
The above images illustrate the concept of image convergence and its link with the screen plane. But this is not the end of the story!
Parallel versus converged
There are two main approaches to shooting stereoscopic images with two cameras; parallel and converged.
Parallel shooting
When shooting parallel, the most distant object sits at the screen plane – there is nothing behind it (nothing in positive parallax). Everything else in the scene is in negative parallax: the left eye image of an object is to the right of the right eye image.
To illustrate this, hold your hand in front of you, with a single finger pointing up. Focus on a point in the distance, but note that you can now see two fingers (while staying focused on the far object). If you close your left eye, the finger image to the right will vanish, showing that the right eye (the open one) is seeing the finger image to the left.
When shooting converged, by contrast, the cameras are ‘toed-in’ so that both point at the object to be placed on the screen plane. This reduces the amount of post-production potentially required to make the image viewable, but with the danger that excessive parallax (having the left and right eye images too far apart from each other) can make parts of the image's depth range virtually impossible to view.
Shooting parallel means it is all but impossible to introduce excessive parallax, provided the interocular distance (camera separation) is not itself excessive, but the two camera images will need to be horizontally re-positioned in post to set the desired screen plane (zero parallax) position.
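To give a feel for the re-positioning involved, here is a minimal sketch of the horizontal shift needed to place a chosen distance at the screen plane when the rig was shot parallel. It assumes idealised pinhole cameras, and all of the figures are made up for the example.

    # Minimal sketch: horizontal image translation (HIT) for parallel-shot footage.
    # Idealised pinhole cameras; the values are illustrative assumptions only.

    def parallel_disparity_mm(interaxial_mm, focal_length_mm, object_distance_mm):
        """On-sensor separation between the left and right images of an object
        shot with parallel cameras. Objects at infinity land on top of each other."""
        return interaxial_mm * focal_length_mm / object_distance_mm

    def hit_to_converge_mm(interaxial_mm, focal_length_mm, convergence_distance_mm):
        """Total horizontal shift between the two images needed in post so that
        the chosen distance ends up at zero parallax (the screen plane)."""
        return parallel_disparity_mm(interaxial_mm, focal_length_mm, convergence_distance_mm)

    # Example: 65 mm interaxial, 35 mm lens, subject at 4 m placed on the screen plane
    shift = hit_to_converge_mm(65, 35, 4000)
    print(round(shift, 3), "mm on the sensor")   # ~0.569 mm, converted to pixels for the delivery format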

This image is the natural result of shooting parallel, with no post-production re-positioning.
In the above picture, all of the image is in front of the screen plane as the whole image has negative parallax (the left eye image is to the right of the right eye image).
For situations where the image is to be viewed on a ‘normal’ sized theatrical screen or monitor, this negative parallax causes a lot of problems: the image appears to be ‘cut’ by the screen edge (or window), even though its perspective suggests that it exists in front of the screen. This is because one eye’s image is cut off before the other’s (this will be covered in Part 3 under Floating Windows). On a far larger screen, such as IMAX, this is not a problem, as the screen edges are outside the viewer’s normal field of view and the issue simply isn't seen. Parallel shooting is the standard for IMAX projects.

This image shows the same parallel shot material horizontally re-positioned in post to place all the image behind the screen window.
For smaller screen sizes, this is a much easier image to see, and it dramatically reduces the likelihood of induced headaches.
The above images also show the differences between negative and positive parallax, and the position of the image relative to the screen plane.
Converged shooting
As already mentioned, when shooting converged the cameras point at the object to be placed on the screen plane. It may also be necessary to rack convergence dynamically through a shot while shooting, in order to keep the point of interest on the screen plane, similar to racking focus in 2D cinematography.
Changing convergence, either dynamically or on a shot-by-shot basis during production, can add a lot of time to a shoot. In addition, there are other potential issues if the camera or subject moves closer to or further away from the rig during the shot.
Racking convergence can also have some very unexpected effects on the viewer’s perception of the image: as the amount of stereo effect changes, the object can appear to grow or shrink relative to its original perceived size.
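To put some numbers on what racking convergence means mechanically, the sketch below works out the toe-in angle each camera needs for a given convergence distance. It assumes a symmetrical rig where each camera rotates by half the total angle, and the figures are illustrative only.

    # Illustrative only: toe-in angle per camera for a symmetrical converged rig.
    import math

    def toe_in_degrees(interaxial_mm, convergence_distance_mm):
        """Angle each camera is rotated inwards so that both optical axes
        cross at the convergence distance."""
        half_interaxial = interaxial_mm / 2.0
        return math.degrees(math.atan(half_interaxial / convergence_distance_mm))

    # Example: 65 mm interaxial, racking convergence from 2 m out to 5 m
    print(round(toe_in_degrees(65, 2000), 3))   # ~0.931 degrees per camera
    print(round(toe_in_degrees(65, 5000), 3))   # ~0.372 degrees per camera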
A further issue with converged shooting is that each camera suffers keystone distortion in the opposite direction, introducing vertical parallax and making the images hard for a viewer to ‘realise’ visually if these keystoning effects are not fixed during post-production. This keystoning distortion also increases with larger camera separations (interocular distances).

Effect of different keystone convergence (or vertical parallax distortion) for each camera.
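For anyone who wants to see where that vertical parallax comes from, the sketch below projects a single point, high up and off to one side, through a toed-in pair and a parallel pair of idealised pinhole cameras, and prints the vertical offset between the two views. The numbers are made up purely for illustration.

    # Illustrative sketch of why toe-in produces vertical parallax (keystoning),
    # using idealised pinhole cameras and made-up figures.
    import math

    def project(point, cam_x_mm, yaw_rad, focal_mm=35.0):
        """Project a world point (x, y, z), in mm, through a pinhole camera sitting
        at (cam_x_mm, 0, 0) and yawed about the vertical axis by yaw_rad
        (positive = toed in towards +x)."""
        x, y, z = point[0] - cam_x_mm, point[1], point[2]
        xc = math.cos(yaw_rad) * x - math.sin(yaw_rad) * z   # rotate into camera space
        zc = math.sin(yaw_rad) * x + math.cos(yaw_rad) * z
        return focal_mm * xc / zc, focal_mm * y / zc         # (horizontal, vertical) image position in mm

    interaxial = 100.0                # deliberately wide separation, as in the test shots
    convergence = 2000.0              # converge 2 m from the rig
    toe_in = math.atan((interaxial / 2) / convergence)
    corner = (800.0, 400.0, 2000.0)   # a point high up and off to one side, like the top of a back wall

    for label, angle in (("converged", toe_in), ("parallel ", 0.0)):
        uL, vL = project(corner, -interaxial / 2, +angle)
        uR, vR = project(corner, +interaxial / 2, -angle)
        print(label, "vertical parallax:", round(vR - vL, 3), "mm on the sensor")
    # The converged rig shows a small but non-zero vertical offset; the parallel rig shows none.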
The first set of images below shows a scene shot converged; the second set shows the same scene shot parallel. These images do not need glasses: they show the left eye image, but if you hover your mouse over each of them the matched right eye image will be seen.
Scene shot converged.
Scene shot parallel.
If you look closely you can see more ‘rotation’ on the back wall in the images shot converged. This gives rise to vertical parallax offsets: the side of the wall that rotates away from the camera gets smaller, while the side that rotates towards the camera gets larger. The fact that you have to look very closely to see it shows the level of error we are talking about, and I deliberately shot these images with a very wide interocular distance!
If this keystoning problem is severe, as it can be, it will require correction via post-production, and can be very time consuming to fix. However, more often the error is small (as seen above), and is not noticeable in the real world.
The alternative to setting convergence in-camera is to shoot parallel and set the convergence point in post. This obviously speeds up shooting during production, and combined with the usual need to shoot with a deep depth of field (hyperfocal focus) it can improve shot throughput on-set dramatically, leaving just the interocular distance to be changed on a shot-by-shot basis.
It should be noted, however, that the parallax of each camera relative to the other is fixed at the moment of shooting (the difference in what the left eye/camera sees of an object compared to the right), so changing the convergence point in post is an unnatural change, because the relative parallax isn't also changed. With small changes this isn't a problem, but a large change can lead to an unnatural image, which again will break the suspension of disbelief.
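A simple way to see this is that a post shift adds the same constant to every object's parallax: the spacing between objects, which is what was actually captured on set, never changes. A toy example with made-up pixel values:

    # Toy example: shifting convergence in post adds a constant to every object's
    # parallax, so the relative depth captured on set stays fixed. Values are made up.
    shot_parallax_px = {"camera dolly": -40, "crew": -20, "back wall": -5}   # parallel shoot: all negative

    hit_px = 20   # shift chosen in post to put the crew on the screen plane
    after_hit = {name: p + hit_px for name, p in shot_parallax_px.items()}
    print(after_hit)   # {'camera dolly': -20, 'crew': 0, 'back wall': 15}

    # The spacing between objects is identical before and after the shift:
    print(shot_parallax_px["back wall"] - shot_parallax_px["camera dolly"])   # 35
    print(after_hit["back wall"] - after_hit["camera dolly"])                 # 35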
There is also the issue that re-aligning the left and right eye images means sliding them closer together or further apart, causing blanking at the edges that requires the images to be cropped and re-sized.
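As a back-of-the-envelope illustration of that trade-off, the sketch below works out how much of the frame survives a given post shift, and by how much the surviving region has to be scaled up to fill the frame again. The figures are examples only, not recommendations.

    # Rough illustration: cropping and re-scaling after shifting the eyes in post.
    # All figures are assumptions for the example.

    def crop_and_scale(frame_width_px, shift_px):
        """After sliding the two eyes apart (or together) by shift_px in total,
        only the overlapping region is valid; scaling it back to full width
        also magnifies the remaining parallax by the same factor."""
        usable_width = frame_width_px - abs(shift_px)
        scale_factor = frame_width_px / usable_width
        return usable_width, scale_factor

    # Example: a 30-pixel total shift on a 1920-pixel wide frame
    usable, scale = crop_and_scale(1920, 30)
    print(usable, "usable pixels, scaled back up by a factor of", round(scale, 4))   # 1890 px, ~1.0159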
The blanking itself can be seen in the following images, where no attempt has been made to mask the problems caused by changing convergence in post. Allowing for such post alignment when shooting is not difficult, however, and can often be easier and more cost-effective than the time taken to set up camera convergence on-set.
Weighing one option against the other is the only way to make an informed judgement on the best way to proceed.
The following images show different examples of the same basic scene shot using firstly, two cameras set to converge on a single point, and below, using a parallel camera setup.

The above image had convergence set by aligning the toe-in of the cameras to the first newel post, with some slight adjustment in post to overcome small shoot inaccuracies (the issue with the close-up banister rail will be covered later in this series).

This image shows parallel shooting with convergence set in post.
Allowing for alignment differences, the actual appearance of the two images is very similar, and it is difficult to see any real variations in convergence or parallax.
Conclusion
All things considered, I prefer the parallel approach because the positives outweigh the negatives – especially when the potential issues of keystoning and the time lost during shooting are taken into consideration. But if there were time on-set to toe-in the cameras, converged shooting would be worth considering.
Steve Shaw is a Partner in Light Illusion, a top consulting service for the digital film market, with offices in the UK and India.