EE 211A Fall Quarter, 2011
Instructor: John Villasenor
Digital Image Processing I
Handout 18

Homework 5 Solutions

Solutions:

1. a. With $M = 2$ sample vectors $u_0 = (2, -1)^T$ and $u_1 = (-2, 0)^T$, the autocorrelation matrix is

\[
R_u = \frac{1}{M}\sum_{k=0}^{M-1} u_k u_k^{*T}
= \frac{1}{2}\left[\begin{pmatrix}2\\-1\end{pmatrix}\begin{pmatrix}2 & -1\end{pmatrix}
+ \begin{pmatrix}-2\\0\end{pmatrix}\begin{pmatrix}-2 & 0\end{pmatrix}\right]
= \begin{bmatrix}4 & -1\\ -1 & \tfrac{1}{2}\end{bmatrix}.
\]

b.

\[
|R_u - \lambda I| = \lambda^2 - \tfrac{9}{2}\lambda + 1 = 0
\;\Rightarrow\; \lambda_0 = 4.266,\quad \lambda_1 = 0.234.
\]

To find $\phi_0$, write $\phi_0 = (1, C)^T$ and require $R_u \phi_0 = \lambda_0 \phi_0$:

\[
\begin{bmatrix}4 & -1\\ -1 & \tfrac{1}{2}\end{bmatrix}\begin{pmatrix}1\\ C\end{pmatrix}
= \begin{pmatrix}4 - C\\ -1 + \tfrac{C}{2}\end{pmatrix}
= \begin{pmatrix}4.266\\ 4.266\,C\end{pmatrix}
\]

\[
\Rightarrow\; C = -0.266
\;\Rightarrow\; \text{normalized } \phi_0 = \begin{pmatrix}0.966\\ -0.257\end{pmatrix},\qquad
\phi_1 = \begin{pmatrix}0.257\\ 0.966\end{pmatrix}.
\]

c.

\[
\Phi^{*T} = \begin{bmatrix}\phi_0^{*T}\\ \phi_1^{*T}\end{bmatrix}
= \begin{bmatrix}0.966 & -0.257\\ 0.257 & 0.966\end{bmatrix}.
\]

d.

\[
v_0 = \Phi^{*T} u_0 = \begin{pmatrix}2.189\\ -0.452\end{pmatrix},\qquad
v_1 = \Phi^{*T} u_1 = \begin{pmatrix}-1.932\\ -0.514\end{pmatrix}.
\]

As a check on the inverse transform,

\[
v_0(0)\,\phi_0 + v_0(1)\,\phi_1
= 2.189\begin{pmatrix}0.966\\ -0.257\end{pmatrix} - 0.452\begin{pmatrix}0.257\\ 0.966\end{pmatrix}
= \begin{pmatrix}2\\ -1\end{pmatrix} = u_0.
\]

e.

\[
R_v = \frac{1}{2}\left[v_0 v_0^{*T} + v_1 v_1^{*T}\right]
= \frac{1}{2}\left[\begin{pmatrix}4.79 & -0.989\\ -0.989 & 0.204\end{pmatrix}
+ \begin{pmatrix}3.73 & 0.99\\ 0.99 & 0.264\end{pmatrix}\right]
= \begin{bmatrix}4.266 & 0\\ 0 & 0.234\end{bmatrix} = \mathrm{diag}(\lambda_k).
\]

f. The unitary DFT matrix for N = 2 is

\[
\frac{1}{\sqrt{2}}\begin{bmatrix}1 & 1\\ 1 & -1\end{bmatrix}
\;\Rightarrow\;
w_0 = \frac{1}{\sqrt{2}}\begin{bmatrix}1 & 1\\ 1 & -1\end{bmatrix}\begin{pmatrix}2\\ -1\end{pmatrix}
= \begin{pmatrix}0.707\\ 2.12\end{pmatrix},\qquad
w_1 = \frac{1}{\sqrt{2}}\begin{bmatrix}1 & 1\\ 1 & -1\end{bmatrix}\begin{pmatrix}-2\\ 0\end{pmatrix}
= \begin{pmatrix}-1.414\\ -1.414\end{pmatrix}.
\]

g.

\[
R_w = \frac{1}{2}\left[w_0 w_0^{*T} + w_1 w_1^{*T}\right]
= \frac{1}{2}\left[\begin{pmatrix}0.5 & 1.5\\ 1.5 & 4.49\end{pmatrix}
+ \begin{pmatrix}2 & 2\\ 2 & 2\end{pmatrix}\right]
= \begin{pmatrix}1.25 & 1.75\\ 1.75 & 3.245\end{pmatrix}.
\]

The KL transform coefficients are completely decorrelated, as shown by the diagonal form of $R_v$. By contrast, $R_w$ is not diagonal.

2. The given random process has mean $m_u = \begin{pmatrix}0\\ -\tfrac{1}{2}\end{pmatrix}$. Subtracting $m_u$ from each vector gives $\tilde u_0 = \begin{pmatrix}2\\ -\tfrac{1}{2}\end{pmatrix}$ and $\tilde u_1 = \begin{pmatrix}-2\\ \tfrac{1}{2}\end{pmatrix}$.
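Before continuing, the arithmetic of problem 1 can be checked numerically. This is an optional sketch using NumPy; the variable names are mine, not the handout's.

```python
import numpy as np

# Sample vectors from problem 1
u0 = np.array([2.0, -1.0])
u1 = np.array([-2.0, 0.0])

# (a) R_u = (1/2)(u0 u0^T + u1 u1^T)
Ru = 0.5 * (np.outer(u0, u0) + np.outer(u1, u1))

# (b) eigh returns eigenvalues in ascending order; flip so the dominant
# eigenvalue comes first, as in the handout
lam, Phi = np.linalg.eigh(Ru)
lam, Phi = lam[::-1], Phi[:, ::-1]

# (d)-(e) KL coefficients and their covariance (should equal diag(lam))
v0, v1 = Phi.T @ u0, Phi.T @ u1
Rv = 0.5 * (np.outer(v0, v0) + np.outer(v1, v1))

# (f)-(g) the 2-point unitary DFT does not diagonalize R_u
F = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
Rw = F @ Ru @ F.T
```

Since eigenvectors are only defined up to sign, the columns of the computed $\Phi$ may differ from the handout's by a sign flip; $R_v$ is unaffected.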
\[
R_{\tilde u} = \frac{1}{2}\left[
\begin{pmatrix}2\\ -\tfrac{1}{2}\end{pmatrix}\begin{pmatrix}2 & -\tfrac{1}{2}\end{pmatrix}
+ \begin{pmatrix}-2\\ \tfrac{1}{2}\end{pmatrix}\begin{pmatrix}-2 & \tfrac{1}{2}\end{pmatrix}
\right]
= \begin{bmatrix}4 & -1\\ -1 & \tfrac{1}{4}\end{bmatrix}.
\]

Its eigenvalues are $\lambda_0 = 4.25$ and $\lambda_1 = 0$. The eigenvector for $\lambda_0$ is $\begin{pmatrix}0.970\\ -0.243\end{pmatrix}$; the vector normal to this is $\begin{pmatrix}0.243\\ 0.970\end{pmatrix}$. The KL transform for this process can be written

\[
\Phi^{*T} = \begin{bmatrix}0.970 & -0.243\\ 0.243 & 0.970\end{bmatrix}.
\]

Applying it to the mean-subtracted vectors gives

\[
v_0 = \Phi^{*T}\tilde u_0 = \begin{pmatrix}2.0616\\ 0\end{pmatrix},\qquad
v_1 = \Phi^{*T}\tilde u_1 = \begin{pmatrix}-2.0616\\ 0\end{pmatrix},\qquad
R_v = \begin{bmatrix}4.25 & 0\\ 0 & 0\end{bmatrix}.
\]

In this particular case, the transformed vectors occupy only one dimension because $\tilde u_0$ and $\tilde u_1$ are collinear. As a result, all the energy in the KL transform appears in the first coefficient; accordingly, one of the eigenvalues is 0. This is not true in general; it is only true for zero-mean "processes" containing 2 vectors.

3. The sum to be computed is

\[
\sigma^2_{k,l} = \sum_m \sum_n \sum_{m'} \sum_{n'} a(k,m)\, a(l,n)\, r(m,n,m',n')\, a(k,m')\, a(l,n').
\]

There are a total of 16 terms in the summation for N = 2. Note that for N = 2,

\[
a(k,m) = \frac{1}{\sqrt{2}}\begin{bmatrix}1 & 1\\ 1 & -1\end{bmatrix}.
\]
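The collinearity argument in problem 2 can also be confirmed numerically; a short sketch (names are mine):

```python
import numpy as np

# Problem 2: the same two vectors, now with the mean removed first
u0 = np.array([2.0, -1.0])
u1 = np.array([-2.0, 0.0])
mu = 0.5 * (u0 + u1)                  # mean vector (0, -0.5)

ut0, ut1 = u0 - mu, u1 - mu           # (2, -0.5) and (-2, 0.5): collinear

# Covariance of the zero-mean process; it is rank 1 because ut0 = -ut1,
# so one eigenvalue is 0 and all energy lands in the first KL coefficient
Ru = 0.5 * (np.outer(ut0, ut0) + np.outer(ut1, ut1))
lam = np.linalg.eigvalsh(Ru)[::-1]    # descending: 4.25, 0
```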
Define

\[
f(k,l,m,n,m',n') = a(k,m)\,a(l,n)\,a(k,m')\,a(l,n') = \tfrac{1}{4} \text{ or } -\tfrac{1}{4},
\]

and, considering $r(\cdot)$ for the separable case $p^{|m-m'|+|n-n'|}$ and the isotropic case $p^{\sqrt{(m-m')^2+(n-n')^2}}$, we can make the following table (the last four columns give $4f(\cdot)$ for each $(k,l)$):

 m  n  m' n'   Separable   Isotropic   k=0,l=0   k=0,l=1   k=1,l=0   k=1,l=1
 0  0  0  0    1           1            1         1         1         1
 0  0  0  1    p           p            1        -1         1        -1
 0  0  1  0    p           p            1         1        -1        -1
 0  0  1  1    p^2         p^sqrt(2)    1        -1        -1         1
 0  1  0  0    p           p            1        -1         1        -1
 0  1  0  1    1           1            1         1         1         1
 0  1  1  0    p^2         p^sqrt(2)    1        -1        -1         1
 0  1  1  1    p           p            1         1        -1        -1
 1  0  0  0    p           p            1         1        -1        -1
 1  0  0  1    p^2         p^sqrt(2)    1        -1        -1         1
 1  0  1  0    1           1            1         1         1         1
 1  0  1  1    p           p            1        -1         1        -1
 1  1  0  0    p^2         p^sqrt(2)    1        -1        -1         1
 1  1  0  1    p           p            1         1        -1        -1
 1  1  1  0    p           p            1        -1         1        -1
 1  1  1  1    1           1            1         1         1         1

Because $\sigma^2_{k,l} = \sum_m\sum_n\sum_{m'}\sum_{n'} f(\cdot)\, r(\cdot)$, $\sigma^2_{0,0}$ for the separable case, for example, can be calculated by 1) multiplying the elements of the separable-covariance column by the corresponding elements of the $k=0,\ l=0$ sign column, 2) adding them up, and 3) dividing the sum by 4 (since the table lists $4f(\cdot)$):

\[
\sigma^2_{0,0} = \frac{1 + p + p + p^2 + p + 1 + p^2 + p + p + p^2 + 1 + p + p^2 + p + p + 1}{4} = (1+p)^2.
\]

Summing for the 8 cases gives:

 (k,l)   Separable      Isotropic
 (0,0)   (1+p)^2        1 + 2p + p^sqrt(2)
 (0,1)   (1+p)(1-p)     1 - p^sqrt(2)
 (1,0)   (1+p)(1-p)     1 - p^sqrt(2)
 (1,1)   (1-p)^2        1 - 2p + p^sqrt(2)

Evaluating at two values of p:

          p = 0.95                p = 0.7
 (k,l)    Separable  Isotropic    Separable  Isotropic
 (0,0)    3.80       3.83         2.89       3.00
 (0,1)    0.0975     0.07         0.51       0.396
 (1,0)    0.0975     0.07         0.51       0.396
 (1,1)    0.0025     0.03         0.09       0.204

Use of a separable covariance model will therefore tend to underestimate the energy in the diagonal DCT elements ($k \approx l$) and to overestimate the energy along the axes.

4. The total energy in $u(m,n)$ is 16, thus the transform coefficient variances must satisfy

\[
\sum_{k=0}^{3}\sum_{l=0}^{3} \sigma^2_{k,l} = 16.
\]

Arrays A and D fail this test, and are therefore not valid $\sigma^2_{k,l}$ for a $u(m,n)$ with unit variance. This leaves arrays B and C.
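The table-based evaluation of $\sigma^2_{k,l}$ can be reproduced by computing the quadruple sum directly for both covariance models. A sketch with my own function names, checked at p = 0.95:

```python
import numpy as np

p = 0.95
A = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)  # unitary transform, N = 2

def r_sep(m, n, mp, np_):
    # separable covariance: p^(|m-m'| + |n-n'|)
    return p ** (abs(m - mp) + abs(n - np_))

def r_iso(m, n, mp, np_):
    # isotropic covariance: p^sqrt((m-m')^2 + (n-n')^2)
    return p ** np.sqrt((m - mp) ** 2 + (n - np_) ** 2)

def sigma2(k, l, r):
    # sigma^2_{k,l} = sum over m, n, m', n' of a(k,m) a(l,n) r(.) a(k,m') a(l,n')
    return sum(A[k, m] * A[l, n] * r(m, n, mp, np_) * A[k, mp] * A[l, np_]
               for m in range(2) for n in range(2)
               for mp in range(2) for np_ in range(2))

sep = [[sigma2(k, l, r_sep) for l in range(2)] for k in range(2)]
iso = [[sigma2(k, l, r_iso) for l in range(2)] for k in range(2)]
# sep ~ [[3.80, 0.0975], [0.0975, 0.0025]], iso ~ [[3.83, 0.07], [0.07, 0.03]]
```

Both models conserve the total energy of 4 for the 2x2 case, which is the same test applied in problem 4.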
Since we know a separable model tends to underestimate variance along the diagonal, and $\sigma^2_{33}\big|_B < \sigma^2_{33}\big|_C$:

⇒ B is from the separable covariance model;
⇒ C is from the isotropic covariance model.

Part b

This is most easily solved using $\sigma^2_{0,0}$ for the separable model. Because the separable 2D model factors into the product of two 1D models, $\sigma^2_{0,0}$ is the square of the 1D DC-coefficient variance, so for an N = 4 1D sequence $\sigma_0^2 = \sqrt{10.233} = 3.199$. Noting that $\sigma_0^2 = R_v(0,0)$ and $R_v = A R_u A^T$, with the first row of $A$ equal to $(0.5\;\; 0.5\;\; 0.5\;\; 0.5)$:

\[
R_v(0,0) = \sigma_0^2 =
\begin{pmatrix}0.5 & 0.5 & 0.5 & 0.5\end{pmatrix}
\begin{bmatrix}
1 & p & p^2 & p^3\\
p & 1 & p & p^2\\
p^2 & p & 1 & p\\
p^3 & p^2 & p & 1
\end{bmatrix}
\begin{pmatrix}0.5\\ 0.5\\ 0.5\\ 0.5\end{pmatrix}
= 1 + 1.5p + p^2 + 0.5p^3 = 3.199.
\]

Trying a few values of p:

 p      σ₀²
 0.80   3.096
 0.85   3.304
 0.82   3.178

⇒ 0.82 is very close to the actual p (the true p is 0.825).
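The trial-and-error search in part (b) can be replaced by a simple bisection on the increasing function $\sigma_0^2(p)$; a sketch (function name is mine):

```python
# Part (b): solve 1 + 1.5 p + p^2 + 0.5 p^3 = 3.199 for p by bisection
def sigma0_sq(p):
    """DC-coefficient variance of the N = 4 1D separable model."""
    return 1.0 + 1.5 * p + p ** 2 + 0.5 * p ** 3

target = 3.199
lo, hi = 0.0, 1.0              # sigma0_sq is increasing on [0, 1]
for _ in range(60):            # 60 halvings: interval width well below 1e-15
    mid = 0.5 * (lo + hi)
    if sigma0_sq(mid) < target:
        lo = mid
    else:
        hi = mid
p_hat = 0.5 * (lo + hi)        # close to the true p = 0.825
```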

This note was uploaded on 12/27/2011 for the course EE 211A taught by Professor Villasenor during the Fall '11 term at UCLA.
