Lecture 24-25

Applications of eigenvectors and eigenvalues:

1. Karhunen-Loeve Transform (KLT):

In many image processing applications, one would like to transmit images (e.g., 512 x 512) over a noisy channel to a remote receiver. The information contained in an image may be much more than what the channel can support, so one would like to quantize the samples of the image vectors. However, good statistical models for real-world images tell us that the samples of a given image are statistically correlated, and quantizing correlated samples independently is not optimal. One therefore typically transforms the image into another image whose samples are uncorrelated; in the transformed domain, the samples are quantized independently.

Suppose we model the image as a zero-mean random vector $X$ of dimension $n \times 1$, with autocorrelation matrix $E[XX^H] = R$. We would like to obtain a linear transformation $L$ (characterized by a matrix $B$) such that $Y = BX$ has a diagonal autocorrelation matrix. Note that $X$ and $Y$ are random vectors, while $B$ is a deterministic transformation. One way to do this is to choose $B = U^H$, where $R = U \Lambda U^H$, $\Lambda$ is the diagonal matrix containing the eigenvalues of $R$, and $U$ is the matrix whose columns are the eigenvectors of $R$. Note that $R$ is always self-adjoint, i.e., $R = R^H$, and positive semidefinite. This implies that $U$ is unitary: $U U^H = U^H U = I$. The autocorrelation matrix of $Y$ is then

$E[YY^H] = E[BXX^H B^H] = E[U^H XX^H U] = U^H E[XX^H] U = U^H R U = U^H (U \Lambda U^H) U = \Lambda.$

In other words, the samples of the transformed vector are uncorrelated, and the variance of $Y_i$, the $i$-th component of $Y$, is given by the $i$-th eigenvalue of $R$. This is called the KL (or whitening) transformation.

2. Handwritten digit recognition using the Singular Value Decomposition (SVD):

Consider a linear transformation $L : S \to T$, where $S = \mathbb{C}^n$ and $T = \mathbb{C}^m$.
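The KL transform described in item 1 can be sketched numerically. The example below is a minimal illustration with synthetic data (the matrix `M` and sample count are arbitrary choices, not from the notes): it estimates $R$ from correlated samples, takes $B = U^H$ from the eigendecomposition of $R$, and checks that the autocorrelation of $Y = BX$ is diagonal with the eigenvalues of $R$ on its diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Build a correlated zero-mean ensemble: X = M @ W, with W white noise.
M = rng.standard_normal((n, n))
W = rng.standard_normal((n, 10000))
X = M @ W                           # columns are realizations of X

R = (X @ X.conj().T) / X.shape[1]   # sample autocorrelation E[X X^H]
eigvals, U = np.linalg.eigh(R)      # R = U diag(eigvals) U^H (R self-adjoint)
B = U.conj().T                      # KL / whitening transform, B = U^H
Y = B @ X

Ry = (Y @ Y.conj().T) / Y.shape[1]  # autocorrelation of Y: should be diagonal
off_diag = Ry - np.diag(np.diag(Ry))
print(np.max(np.abs(off_diag)))     # negligible compared to the diagonal
```

Since $U$ diagonalizes the sample autocorrelation exactly, the off-diagonal entries of `Ry` vanish up to floating-point error, and `np.diag(Ry)` matches `eigvals`, i.e., the variance of each $Y_i$ is the corresponding eigenvalue of $R$.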
This is characterized by a matrix $A$ of size $m \times n$. Define $\mathrm{rank}(A) = \dim(R(A))$.

We will make a series of observations that will lead us to the Singular Value Decomposition of $A$. Before we start making observations, let us consider a useful theorem.

Theorem: The dimension of the range space of $A$ equals the dimension of the range space of $A^H$, i.e., $\dim(R(A)) = \dim(R(A^H))$.

Proof: We will construct a linear transformation from $R(A^H)$ to $R(A)$ that is one-to-one and onto; this then implies that the dimensions must be the same. Let $T : R(A^H) \to R(A)$ be given by $T(v) = Av$, for every $v \in R(A^H)$. First we show that this mapping is one-to-one, which is done by showing that the null space of $T$ is trivial. Let us proceed. Let $v$ belong to the range space of $A^H$ and to the null space of $T$. Then $T(v) = Av = 0$, and there exists a vector $x$ such that $v = A^H x$. These two facts imply that $\| A^H x \|^2 = x^H A A^H x = x^H A v = x^H 0 = 0$. ...
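The theorem above can be checked numerically: for any complex matrix $A$, the ranks of $A$ and $A^H$ agree. A small sketch (the matrix sizes and the target rank $r$ are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 5, 7, 3
# A = P @ Q has rank at most r (and exactly r with probability 1,
# since random Gaussian factors are full rank almost surely).
P = rng.standard_normal((m, r)) + 1j * rng.standard_normal((m, r))
Q = rng.standard_normal((r, n)) + 1j * rng.standard_normal((r, n))
A = P @ Q

rank_A = np.linalg.matrix_rank(A)            # dim R(A)
rank_AH = np.linalg.matrix_rank(A.conj().T)  # dim R(A^H)
print(rank_A, rank_AH)                       # both should equal r = 3
```

`matrix_rank` counts the singular values of its argument above a tolerance, and $A$ and $A^H$ share the same nonzero singular values, which is the SVD-flavored restatement of this theorem.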
This note was uploaded on 01/10/2012 for the course EECS 551, taught by Professor J during the Spring '11 term at the University of Michigan.