Light Field Superresolution

Tom E. Bishop    Sara Zanetti    Paolo Favaro
Department of Engineering and Physical Sciences
Heriot-Watt University, Edinburgh, UK
{t.e.bishop, sz73, paolo.favaro}

Figure 1. From left to right: light field image captured with a plenoptic camera (detail); the light field image on the left rearranged as a collection of several views; central view extracted from the light field, with one pixel per microlens, as in a traditional rendering [23]; central view superresolved with our method.

Abstract

Light field cameras have recently been shown to be very effective in applications such as digital refocusing and 3D reconstruction. In a single snapshot these cameras provide a sample of the light field of a scene by trading off spatial resolution against angular resolution. Current methods produce images at a resolution that is much lower than that of traditional imaging devices. However, by explicitly modeling the image formation process and incorporating priors such as Lambertianity and texture statistics, these images can be reconstructed at a higher resolution. We formulate this method in a variational Bayesian framework and perform the reconstruction of both the surface of the scene and the (superresolved) light field. The method is demonstrated on both synthetic and real images captured with our light-field camera prototype.

1. Introduction

Recently, we have seen not only that it is possible to build practical integral imaging and mask-enhanced systems based on commercial cameras [1, 12, 23, 28], but also that such cameras provide an advantage over traditional imaging systems by enabling, for instance, digital refocusing [23] and the recovery of transparent objects in microscopy [18] from a single snapshot.

The performance of such systems, however, has been limited by the resolution of the camera sensor and of the microlens array. These define, according to the sampling theorem, the tradeoff between spatial and angular resolution of the recovered light field [23, 17]. Furthermore, due to diffraction, the image resolution of the system is restricted by the size of the microlenses [13].

[Figure 2. Left: One view captured by our plenoptic camera. In Figures 1 and 7 we restore the red and green highlighted regions. Right: The estimated depth map, in meters (color scale from 0.6 m to 0.9 m).]

Instead of increasing pixel density, we enhance detail by designing superresolution (SR) algorithms which extract additional information from the available data (see Figure 1). More specifically, we exploit the fact that light fields of natural scenes are not a collection of random signals. Rather, they generally satisfy models of limited complexity [17]. A general way to describe the properties of such light fields is via the bidirectional reflectance distribution function (BRDF), e.g., Ward's model [29]. We are interested in exploring BRDF models of increasing order of complexity. In this paper we focus on the Lambertian...
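The rearrangement described above, from a raw plenoptic image into a collection of views with one pixel per microlens (Figure 1), can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes an idealized raw image in which each microlens covers exactly an s-by-s block of pixels, and the function and variable names are hypothetical.

```python
import numpy as np

def raw_to_views(raw, s):
    """Rearrange an idealized plenoptic raw image into s*s sub-aperture views.

    raw: 2D array of shape (s*M, s*N), where each s-by-s block of pixels
         lies under one of the M*N microlenses.
    Returns an array of shape (s, s, M, N): views[u, v] is the view formed
    by taking pixel (u, v) under every microlens, i.e. one sample per
    microlens at a fixed angular coordinate.
    """
    M, N = raw.shape[0] // s, raw.shape[1] // s
    # Split the image into microlens blocks, then move the intra-lens
    # (angular) indices in front of the microlens (spatial) indices.
    blocks = raw.reshape(M, s, N, s)
    return blocks.transpose(1, 3, 0, 2)

# A traditional rendering keeps one pixel per microlens: the central
# view is views[s // 2, s // 2], an M-by-N image.
```

This also makes the spatial-angular tradeoff concrete: the sensor's s*M by s*N pixels yield s*s views of only M by N pixels each, which is the low spatial resolution that superresolution then tries to recover.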
This note was uploaded on 04/22/2010 for the course MI IP taught by Professor Vladbalan during the Spring '10 term at Universidad del Rosario.