Vision_Notes_3 - III. Image formation: geometrical optics...

III. Image formation: geometrical optics

Surfaces typically reflect incident light in all directions (unless they happen to be perfectly specular, like perfect mirrors). For example, Lambertian surfaces reflect light equally in all directions. This fact poses a serious obstacle to obtaining useful visual information from the environment, and it undoubtedly slowed the evolution of vision in biological organisms. To see this, consider a piece of plane white paper and imagine that you have attached to each point on the paper a small sensor (a receptor or a photocell). As the paper is turned in various directions the light falling on the sensors goes up and down, but for each direction the paper is pointed, the light is very nearly uniform across all the sensors. Clearly the responses of these sensors would supply very little information about objects in the environment. The problem, of course, is that light is reflected in all directions, and hence each sensor receives light from every object in front of it. Useful vision requires that the light waves be sorted out so that the light from each direction falls on a unique region of the sensor array. This sorting process is called image formation. Evolution has produced two general types of solutions to the problem of image formation, which are described below.

A. Visual scene as a collection of point sources

To understand image formation it is useful to note that the visual scene can be regarded as a collection of point sources; a point source is an infinitesimal light source that emits light in all directions.

B. Two point sources

The problem of image formation is illustrated in Figure 3.1A for the special case where there are just two small objects (point sources) in the environment. The light waves from the two point sources are represented by concentric circles. As can be seen, even in this simple case, the light from the two point sources is completely confounded in the sensor array.
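The confounding can be made concrete with a small numerical sketch. The geometry below is an assumption chosen for illustration: two equal point sources 1 m in front of a bare one-dimensional sensor array, each sensor summing inverse-square contributions from both sources. Every sensor receives light from both sources, and the readings are very nearly uniform across the array.

```python
# Sketch (assumed geometry): two point sources illuminating a bare sensor
# array with no aperture. Each sensor sums inverse-square contributions
# from both sources, so every sensor sees light from every source.
import math

def irradiance(sensor_x, sources):
    """Total light at a sensor on the array (y = 0) from all point sources."""
    total = 0.0
    for sx, sy, power in sources:
        r2 = (sensor_x - sx) ** 2 + sy ** 2   # squared distance to source
        total += power / (4 * math.pi * r2)   # inverse-square falloff
    return total

# Two point sources 1 m in front of the array, 0.2 m apart.
sources = [(-0.1, 1.0, 1.0), (0.1, 1.0, 1.0)]
readings = [irradiance(x / 100, sources) for x in range(-5, 6)]

# The readings vary only slightly across the array: the two sources are
# thoroughly confounded, as in Figure 3.1A.
spread = (max(readings) - min(readings)) / max(readings)
print(f"relative variation across sensors: {spread:.3f}")
```

Under these assumed distances, the relative variation across the array is well under one percent, so the sensor responses alone say almost nothing about where (or how many) sources there are.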
Clearly something must be done if separate images of the two point sources are to be formed on the sensor array.

C. Imaging with pinholes

One solution is to place the sensor array behind an opaque surface that has a small aperture (a pinhole) in it. The principle is illustrated in Figure 3.1B. Because of the pinhole, the light waves from the two point sources fall on different parts of the sensor array. This method of image formation can produce moderate-resolution images (pinhole cameras were popular several decades ago), and the method is found in some primitive biological vision systems (e.g., the Nautilus eye).
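The sorting that the pinhole performs is pure geometry. As a hedged sketch (the coordinate frame and distances are assumptions, not from the notes): with an ideal pinhole at the origin and the sensor plane a distance d behind it, a scene point at (x, y, z) projects to (-d·x/z, -d·y/z), so each direction in the scene lands on a unique sensor.

```python
# Sketch (assumed geometry): ideal pinhole at the origin, sensor plane a
# distance d behind it. A scene point at (x, y, z) with z > 0 projects
# through the pinhole to (-d*x/z, -d*y/z): an inverted, scaled image.

def pinhole_project(point, d):
    """Project a 3-D scene point through a pinhole onto the sensor plane."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must lie in front of the pinhole (z > 0)")
    return (-d * x / z, -d * y / z)   # inversion and scaling by d/z

# The two point sources of Figure 3.1B now form separate images:
d = 0.02   # assumed 2 cm pinhole-to-sensor distance
left = pinhole_project((-0.1, 0.0, 1.0), d)
right = pinhole_project((0.1, 0.0, 1.0), d)
print(left, right)   # distinct image positions: the light has been sorted
```

Note the inversion: the source on the left images to the right of the optical axis and vice versa, which is exactly the behavior of a pinhole camera.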

[Figure 3.1: panels A, B, and C]

However, pinhole imaging has some serious weaknesses. It is clear from Figure 3.1B that the smaller the pinhole, the more localized the images of the two point sources will be, and hence the better the resolution of images in general. Unfortunately, shrinking the aperture only works up to a certain point. Once the aperture becomes smaller than some critical size, diffraction of the light waves by the aperture spreads the light over the sensor array, and resolution begins to worsen again.
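This trade-off can be sketched numerically. The model below is an assumption for illustration, not from the notes: geometric blur on the sensor grows with the aperture diameter a (the sensor sees the aperture's shadow), while diffraction blur grows roughly as wavelength/a; their sum is minimized near a = sqrt(wavelength x distance). The exact constant depends on the blur criterion chosen.

```python
# Sketch of the pinhole resolution trade-off (assumed simple blur model):
# geometric blur ~ a, diffraction blur ~ 2.44 * wavelength * f / a (the
# Airy-disk diameter). Their sum has a minimum near sqrt(wavelength * f).
import math

def blur(a, wavelength, f):
    """Approximate total blur-spot diameter for aperture diameter a."""
    geometric = a                            # shadow of the aperture itself
    diffraction = 2.44 * wavelength * f / a  # diffraction spread on sensor
    return geometric + diffraction

wavelength = 550e-9   # green light, ~550 nm
f = 0.025             # assumed 25 mm pinhole-to-sensor distance

# Scan candidate apertures and find the one with the smallest blur spot.
apertures = [i * 1e-6 for i in range(50, 1000)]   # 50 um to 1 mm
best = min(apertures, key=lambda a: blur(a, wavelength, f))
print(f"best aperture diameter: {best * 1e3:.2f} mm")
```

For these assumed values the optimum comes out at a fraction of a millimeter: larger apertures blur geometrically, smaller ones blur by diffraction, which is exactly the critical-size behavior described above.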

This note was uploaded on 08/12/2008 for the course PSY 380E taught by Professor Geisler during the Fall '07 term at University of Texas at Austin.
