MITMAS_531F09_lec04_notes - MAS.963: Computational Camera...

MAS.963: Computational Camera and Photography, Fall 2009
Computational Illumination
Prof. Ramesh Raskar
October 2, 2009
Scribe: Anonymous MIT student
Lecture 4

Poll: When will Google Earth go live?

With increasing camera capabilities, one can imagine Google Earth going live. The question is how long it will take the present-day computational photography field to actually implement this as a free community service for a city. What kind of camera would be best suited for it? How much computational power would it require? Some of the arguments made during the discussion:

(1) "People would not want to compromise their privacy for this" vs. "One can always blur out faces and omit sensitive areas from coverage."
(2) "Google does not have enough camera infrastructure to do this" vs. "People would be happy to place a webcam in their window if Google paid them for it."
(3) "Do we have good enough satellite cameras? Google trucks cannot serve live feeds everywhere" vs. "Satellite imagery can be used."

Image removed due to copyright restrictions. See video linked at reference [1].
Fig. 1: Recent research from Georgia Tech showing a sample of Google Earth + Augmented Reality to make Google Earth live [1].

The consensus was that although this raises more questions than it answers, the day when one can fire up a browser and actually check whether a kid in California is going to school is not far off. With the recent announcement from researchers at the Georgia Institute of Technology [1] of a technology that captures real-time video from multiple locations and perspectives, then stitches, interpolates, and animates it (to hide identities) to show real-time movement over a small part of a city, a live Google Earth does not seem a distant dream. An interesting project in Tokyo predicted rain conditions from wiper-movement data accumulated from hundreds of cars in the region. Soon we may no longer need such proxies: we could actually record rain, snow, and hurricane histories over time in a Google Earth world-history archive!

"The next decade is going to be the decade of visual computing."

Computational Illumination

How can we create programmable lighting that minimizes critical human judgment at the time of capture, and provides incredible control over post-capture manipulation for hyper-realistic imagery? Computational illumination [2] is, by and large, illuminating a scene in a coded, controllable fashion, programmed to highlight favorable scene properties or to aid information extraction. The following parameters of auxiliary photographic lighting are programmable:

(1) Presence or absence: flash/no-flash
(2) Light position: multi-flash for depth edges; programmable dome (image relighting and matting)
(3) Light color/wavelength
(4) Spatial modulation: synthetic aperture illumination
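To illustrate the multi-flash idea in (2), the sketch below marks depth edges by finding pixels that fall into shadow under one flash position but not in the per-pixel maximum over all flash images. This is a minimal toy version, not the full multi-flash algorithm from the literature: the function name `depth_edge_map`, the one-dimensional ratio-drop test, and the threshold value are all illustrative choices.

```python
import numpy as np

def depth_edge_map(flash_images, drop=0.5, eps=1e-6):
    """Toy multi-flash depth-edge detector (illustrative only).

    flash_images: grayscale images of the same scene, each lit by a
    flash at a different position. Near a depth edge, a pixel is cast
    into shadow by some flash positions but stays lit in others.
    """
    # Per-pixel maximum across flash positions approximates a shadow-free image.
    i_max = np.maximum.reduce(flash_images)
    edges = np.zeros(i_max.shape, dtype=bool)
    for img in flash_images:
        ratio = img / (i_max + eps)   # near 0 inside that flash's cast shadows
        dx = np.diff(ratio, axis=1)   # horizontal lit-to-shadow transitions
        edges[:, 1:] |= dx < -drop    # sharp drop marks a candidate depth edge
    return edges
```

In the actual technique, shadows abutting opposite sides of an object under left/right/top/bottom flashes let the transitions be localized along both axes; this sketch checks only horizontal drops to keep the idea visible.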

