It’s not the camera that makes the picture, but the cameraman. Now, however, a picture can be creatively reworked in software after it has been taken, by changing the depth of field and the angle of view. A second technique lets a computer reconstruct the original quality from a reduced-resolution image. Researchers presented these solutions at the IBC International Broadcasting Convention in Amsterdam.
Showdown for Harry Potter and Voldemort. The final scene with the last duel is finally in the can. You see a close-up of the young wizard while the faces of his enemies are blurred in the background. Unfortunately, the crew notices in post-production that they should have focused more sharply on Voldemort’s face. They are also no longer satisfied with the angle of view. Normally, they would have to reshoot the scene. Now, researchers at the Fraunhofer Institute for Integrated Circuits IIS in Erlangen, Germany have put their heads together with their colleagues from the Fraunhofer Institute for Applied Optics and Precision Engineering IOF in Jena, Germany to apply the light field technique, giving professional film-makers room to adjust images in post-production after shooting.
The scientists mounted a microlens array in front of a camera sensor developed by researchers at the Fraunhofer IOF.
Each lens of this light field camera records a slightly shifted image of the scene, as if several cameras had been lined up next to each other. The special thing about these cameras is that they do not just record 2-dimensional images; they capture a 4-D light field. The sensor records the position, intensity and direction of each incoming ray of light. The slightly shifted images yield various views of the scene that are later processed on a computer. In post-production, the creative producer can then decide what depth of focus and angle of view a sequence should have.
These are settings and decisions that the cameraman otherwise has to make when shooting.
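The refocusing principle described above can be sketched as a shift-and-sum over the sub-aperture views recorded behind the individual microlenses: shifting each view in proportion to its lens offset and averaging brings one synthetic focal plane into sharp focus while blurring the rest. This is a minimal illustration of the general light-field idea, not the IIS implementation; the function and parameter names are hypothetical, and the integer `np.roll` stands in for proper sub-pixel shifting.

```python
import numpy as np

def refocus(subviews, grid, shift_per_view):
    """Shift-and-sum refocusing over a grid of sub-aperture views.

    subviews: dict mapping (u, v) lens-grid coordinates to 2-D images.
    grid: list of (u, v) coordinates, centered on (0, 0).
    shift_per_view: pixel shift applied per unit of lens offset;
        varying it moves the synthetic focal plane.
    """
    acc = None
    for (u, v) in grid:
        # integer np.roll approximates the sub-pixel shift in this sketch
        shifted = np.roll(subviews[(u, v)],
                          (int(round(v * shift_per_view)),
                           int(round(u * shift_per_view))),
                          axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    return acc / len(grid)
```

A point whose per-view shift matches `shift_per_view` is re-aligned in every view and adds up sharply; points at other depths land at different positions in each view and are averaged into a blur.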
Beyond this, the various views of a scene also make it possible to create depth maps, as with 3-D capturing devices. Arne Nowak is the group leader for computational imaging and algorithms at the IIS. When we caught up with him, he told us who benefits from this development: “Anybody who wants to make 3-D pictures with just one single camera. Furthermore, every cameraman and director who wants to subsequently work creatively on the content. Beyond this, users profit who want to extend the scene with depth information such as virtual avatars or other graphic effects”.
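The depth maps mentioned above rest on a simple observation: an object's apparent position shifts between neighbouring views, and the size of that shift (the disparity) falls with distance. A toy block-matching sketch of this idea, assuming two horizontally shifted views and making no claim to be the researchers' actual algorithm, could look like this:

```python
import numpy as np

def disparity_map(left, right, max_disp, win=1):
    """Toy block-matching disparity between two horizontally shifted
    views. For each pixel, pick the shift d with the lowest
    sum-of-absolute-differences cost over a small window; depth is
    then proportional to baseline / disparity on a calibrated rig.
    (A brute-force sketch, not the IIS method.)"""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        y0, y1 = max(0, y - win), min(h, y + win + 1)
        for x in range(w):
            x0, x1 = max(0, x - win), min(w, x + win + 1)
            best_cost, best_d = np.inf, 0
            for d in range(max_disp + 1):
                if x0 - d < 0:
                    break  # window would leave the right image
                cost = np.abs(left[y0:y1, x0:x1] -
                              right[y0:y1, x0 - d:x1 - d]).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

With the many closely spaced views of a microlens array, the same matching can be averaged over dozens of view pairs, which is what makes single-camera depth capture practical.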
This technique also lends itself to image analysis in fields such as medical technology and industrial inspection.
Make Something Big from Something Small
Furthermore, the researchers based in Erlangen, Germany presented a solution for reconstructing images called the non-regular sampling method. It uses a special image sensor with fewer pixels than a high-resolution HD sensor, arranged in a non-regular pattern. That means it captures only the most essential image information needed to record a scene. A computer later uses the sampling pattern and the signal frequencies of the image to recalculate it at high-resolution quality. The data no longer have to be compressed and processed in the camera; the computer handles this because, with its greater computing power, it generates images faster and in better quality. This energy- and memory-hungry process is thus shifted to post-production.
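The reconstruction step can be illustrated with a deliberately simple stand-in: keep the non-regularly sampled pixels fixed and fill the gaps by repeated neighbour averaging. The actual method exploits the frequency content of the image and is far more sophisticated; this sketch only shows the basic idea of recovering a full image from a sparse, irregular set of samples.

```python
import numpy as np

def reconstruct(samples, mask, iters=500):
    """Toy reconstruction of a full image from non-regularly
    sampled pixels. Known pixels (mask == True) stay fixed while
    unknown pixels are filled by repeated 4-neighbour averaging, a
    simple stand-in for the frequency-based reconstruction used in
    the actual method."""
    img = samples.astype(float).copy()
    for _ in range(iters):
        pad = np.pad(img, 1, mode='edge')  # replicate edges
        avg = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
               pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
        img = np.where(mask, samples, avg)  # re-impose the samples
    return img
```

Because the samples sit at irregular positions, every image region contributes some true pixel values, which is what lets the missing detail be estimated rather than merely interpolated along a fixed grid.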
This is why Arne Nowak thinks “this technique is interesting for a lot of media applications where the image is recorded quickly on location and you can take your time processing it in post-production”.