V. Image formation: Fourier optics

The use of linear-systems techniques to measure and describe optical systems is called Fourier optics (as opposed to geometrical optics). As mentioned earlier, these techniques are useful for understanding optical systems (such as the human eye) where diffraction effects are significant.

The laws of physics and experimental measurement show that lens- and aperture-based optical systems (like the human eye and most cameras) act as linear systems that transform the two-dimensional light distribution at an object plane into a two-dimensional light distribution on the imaging surface. To be concrete, let i(x, y) be the luminance distribution of the face of a monochromatic display screen and let o(x, y) be the illuminance distribution on the imaging surface (e.g., the layer of photoreceptors in the retina), where the coordinates x, y are expressed as visual angles. All of the optical processing between i(x, y) and o(x, y) behaves as a linear system, L.

Most optical systems (including the human eye) are shift-invariant only within limited regions of the visual field (usually annular regions around the optic axis). Thus, in order to use most of the properties of linear-systems analysis described above, it is necessary to divide the visual field into a number of regions so that over each region the optical system can be treated as an LSI system. The size these regions need to be can be determined by experiment.

Consider image formation in one of the regions that can be described as an LSI system. Using the convolution formulas (equation 4.12 or equations 4.43 and 4.44), the image can be predicted for any picture on the display screen, once the impulse-response function, h(x, y), is known. In imaging systems, the impulse-response function is the image formed by a unit-energy point source. This optical impulse-response function is often referred to as the point-spread function.
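The image-formation step described above can be sketched numerically: the retinal image o(x, y) is the convolution of the object distribution i(x, y) with the point-spread function h(x, y). The Gaussian PSF below is a hypothetical stand-in for a measured one, used only to make the sketch self-contained.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=15, sigma=2.0):
    """Hypothetical point-spread function: a unit-energy 2-D Gaussian.
    A real PSF would be measured or computed from the eye's optics."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()  # normalize to unit total energy

def form_image(i_xy, psf):
    """Predict the image o(x, y) as the convolution of the object
    distribution i(x, y) with the point-spread function h(x, y)."""
    return fftconvolve(i_xy, psf, mode="same")

# Sanity check of the definition: a unit-energy point source maps
# to (a copy of) the point-spread function itself.
point = np.zeros((64, 64))
point[32, 32] = 1.0
image = form_image(point, gaussian_psf())
```

Because the system is treated as LSI within the region, the same `form_image` call predicts the image for any input picture, not just a point source.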
Amazingly, the single point-spread function accurately represents the optical effects of all the apertures and refracting surfaces in the system, including all the (monochromatic) aberration and diffraction effects.

Most display screens (and natural environments) do not produce monochromatic images. Strictly speaking, because of chromatic aberrations, a different point-spread function is required for each monochromatic wavelength. However, for display screens it is usually accurate enough to use a single point-spread function for each phosphor. This point-spread function represents the composite of the point-spread functions for all the wavelengths emitted by the phosphor.

A. Direct measurement of the point-spread function

The most direct method of measuring a point-spread function involves physically recording the light distribution on the imaging surface produced by a precisely specified input object. The simplest procedure conceptually would be to measure the light distribution produced by a point source object. However, this is usually not the best ...
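The composite phosphor PSF described above can be sketched as an emission-weighted average of monochromatic PSFs. The wavelength blur widths and emission weights below are illustrative assumptions, not measured values; in practice each monochromatic PSF would come from measurement or an optical model.

```python
import numpy as np

def monochromatic_psf(size, sigma):
    """Stand-in for a measured single-wavelength PSF (unit energy);
    sigma models the blur at that wavelength."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def composite_psf(blur_sigmas, emission_weights, size=15):
    """Composite PSF for one phosphor: the average of the PSFs at each
    emitted wavelength, weighted by relative emission energy."""
    w = np.asarray(emission_weights, dtype=float)
    w = w / w.sum()  # normalize weights so the composite keeps unit energy
    return sum(wi * monochromatic_psf(size, s)
               for wi, s in zip(w, blur_sigmas))

# Illustrative: three emission lines, slightly different blur widths
# (chromatic aberration), dominated by the middle wavelength.
phosphor_psf = composite_psf([1.8, 2.0, 2.3], [0.2, 0.6, 0.2])
```

Because each monochromatic PSF has unit energy and the weights sum to one, the composite also has unit energy, so it can be used in the convolution formula exactly like a monochromatic PSF.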
Fall '07, GEISLER
