V. Image formation: Fourier optics

The use of linear-systems techniques to measure and describe optical systems is called Fourier optics (as opposed to geometrical optics). As mentioned earlier, these techniques are useful for understanding optical systems (such as the human eye) in which diffraction effects are significant.

The laws of physics and experimental measurement show that lens- and aperture-based optical systems (like the human eye and most cameras) act as linear systems that transform the two-dimensional light distribution at an object plane into a two-dimensional light distribution on the imaging surface. To be concrete, let i(x, y) be the luminance distribution on the face of a monochromatic display screen and let o(x, y) be the illuminance distribution on the imaging surface (e.g., the layer of photoreceptors in the retina), where the coordinates x, y are expressed as visual angles. All of the optical processing between i(x, y) and o(x, y) behaves as a linear system, L.

Most optical systems (including the human eye) are shift-invariant only within limited regions of the visual field (usually annular regions around the optic axis). Thus, in order to use most of the properties of linear-systems analysis described above, it is necessary to divide the visual field into regions small enough that, over each region, the optical system can be treated as an LSI system. The required size of these regions can be determined by experiment.

Consider image formation in one of the regions that can be described as an LSI system. Using the convolution formulas (equation 4.12 or equations 4.43 and 4.44), the image can be predicted for any picture on the display screen, once the impulse-response function, h(x, y), is known. In imaging systems, the impulse-response function is the image formed by a unit-energy point source. This optical impulse-response function is often referred to as the point-spread function.
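The prediction step described above can be sketched numerically: convolve the screen distribution i(x, y) with the point-spread function h(x, y) to obtain the retinal image o(x, y). The sketch below uses a unit-volume Gaussian as a stand-in PSF (an assumption for illustration only; a real eye's PSF is measured, not Gaussian) and implements the convolution with the FFT via the convolution theorem.

```python
import numpy as np

def gaussian_psf(size, sigma):
    # Unit-volume Gaussian as a stand-in point-spread function.
    # (Assumption: the real optical PSF would be measured; see section A.)
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return h / h.sum()          # normalize so a unit-energy point stays unit-energy

def image_of(i, h):
    # o(x, y) = i(x, y) convolved with h(x, y), computed via the convolution
    # theorem.  Zero-pad the PSF to the scene size, then roll its peak to the
    # origin so the output is not shifted.  This is a circular convolution,
    # which is fine when the stimulus sits well inside the field.
    H = np.zeros(i.shape, dtype=float)
    k = h.shape[0]
    H[:k, :k] = h
    H = np.roll(H, (-(k // 2), -(k // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(i) * np.fft.fft2(H)))
```

As a sanity check, feeding in a single unit-energy point source reproduces the PSF itself, centered on the point, which is exactly the defining property of the impulse response.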
Amazingly, this single point-spread function accurately represents the optical effects of all the apertures and refracting surfaces in the system, including all the (monochromatic) aberration and diffraction effects.

Most display screens (and natural environments) do not produce monochromatic images. Strictly speaking, because of chromatic aberration, a different point-spread function is required for each monochromatic wavelength. However, for display screens it is usually accurate enough to use a single point-spread function for each phosphor. This point-spread function represents the composite of the point-spread functions for all the wavelengths emitted by the phosphor.

A. Direct measurement of the point-spread function

The most direct method of measuring a point-spread function is to physically record the light distribution on the imaging surface produced by a precisely specified input object. The simplest procedure, conceptually, would be to measure the light distribution produced by a point source object. However, this is usually not the best …
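The per-phosphor composite described above can be sketched as an emission-weighted sum of monochromatic point-spread functions. Everything numeric here is hypothetical: the three sampled wavelengths, their weights, and the Gaussian widths (which stand in for chromatic aberration growing away from the in-focus wavelength) are illustrative values, not measured data.

```python
import numpy as np

def gaussian_psf(size, sigma):
    # Unit-volume Gaussian stand-in for one monochromatic point-spread function.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return h / h.sum()

# Hypothetical phosphor emission: relative energy at three sampled wavelengths.
weights = np.array([0.2, 0.6, 0.2])   # relative emitted energy, sums to 1
sigmas  = np.array([2.0, 1.2, 2.1])   # PSF width per wavelength (illustrative)

def composite_psf(size, sigmas, weights):
    # The phosphor's effective PSF: the emission-weighted sum of the
    # monochromatic PSFs, one per sampled wavelength.
    return sum(w * gaussian_psf(size, s) for w, s in zip(weights, sigmas))
```

Because the weights sum to one and each monochromatic PSF has unit volume, the composite is itself a unit-volume point-spread function, so it can be used directly in the convolution formulas above.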