Chapter 10

1. (a) (AA^H)^H = (A^H)^H A^H = AA^H, so AA^H is Hermitian. A Hermitian matrix satisfies λ = λ*, i.e. its eigenvalues are real, and it has a unitary eigendecomposition AA^H = QΛQ^H.

(b) x^H AA^H x = (x^H A)(x^H A)^H = ||x^H A||^2 ≥ 0 ∴ AA^H is positive semidefinite.

(c) I_M + AA^H = I_M + QΛQ^H = Q(I + Λ)Q^H. AA^H positive semidefinite ⇒ λ_i ≥ 0 ∀i ∴ 1 + λ_i > 0 ∀i ∴ I_M + AA^H is positive definite.

(d) det[I_M + AA^H] = det[I_M + QΛQ^H] = det[Q(I_M + Λ_M)Q^H] = det[I_M + Λ_M] = ∏_{i=1}^{rank(A)} (1 + λ_i)

det[I_N + A^H A] = det[I_N + Q̃ΛQ̃^H] = det[Q̃(I_N + Λ_N)Q̃^H] = det[I_N + Λ_N] = ∏_{i=1}^{rank(A)} (1 + λ_i)

∵ AA^H and A^H A have the same nonzero eigenvalues ∴ det[I_M + AA^H] = det[I_N + A^H A].

2. H = UΣV^T with

U = [ 0.4793  0.8685  0.1298
      0.5896  0.4272  0.6855
      0.6508  0.2513  0.7164 ]

Σ = [ 1.7034  0       0
      0       0.7152  0
      0       0       0.1302 ]

V = [ 0.3458  0.6849  0.4263
      0.5708  0.2191  0.0708
      0.7116  0.6109  0.0145
      0.2198  0.3311  0.9017 ]

3. H = UΣV^T. Let

U = [ 1 0
      0 1
      0 0 ]

V = [ 1 0
      0 1
      0 0 ]

Σ = [ 1 0
      0 2 ]

∴ H = UΣV^T = [ 1 0 0
                0 2 0
                0 0 0 ]

4. Check the rank of each matrix:
rank(H_1) = 3 ∴ multiplexing gain = 3
rank(H_2) = 4 ∴ multiplexing gain = 4

5. C = Σ_{i=1}^{R_H} log2(1 + λ_i ρ/M_t), maximized subject to the constraint Σ_i λ_i = constant. Setting the partial derivatives equal (the Lagrange condition) gives

(ρ/M_t) · 1/(ln 2 · (1 + λ_i ρ/M_t)) − (ρ/M_t) · 1/(ln 2 · (1 + λ_j ρ/M_t)) = 0 ⇒ λ_i = λ_j ∀ i, j

∴ capacity is maximized when all R_H singular values are equal.

6. (a) Any method to show H ≈ UΛV is acceptable. For example, the percent difference d_ij = |H_ij − (UΛV)_ij| / |H_ij| × 100 gives

D = [ 0.13  0.08  0.11
      0.05  0.09  0.14
      0.23  0.13  0.10 ]

(b) Precoding filter M = V^H, shaping filter F = U^H:

F = [ 0.5195  0.3460  0.7813
      0.0251  0.9078  0.4188
      0.8540  0.2373  0.4629 ]

M = [ 0.2407  0.8894  0.3887
      0.4727  0.2423  0.8473
      0.8478  0.3876  0.3622 ]

Thus Y = F H M X + F N = U^H (UΛV) V^H X + U^H N = ΛX + U^H N.

(c) Water-filling: P_i/P = 1/γ0 − 1/γ_i for γ_i > γ0, and 0 otherwise, where γ_i = λ_i^2 P/(N0 B):

γ1 = 94.5, γ2 = 6.86, γ3 = 0.68

Assume γ2 > γ0 > γ3, since γ3 = 0.68 is clearly too small for data transmission. Then

Σ_i P_i/P = 1 ⇒ 2/γ0 − 1/γ1 − 1/γ2 = 1 ⇒ γ0 = 1.73

which is consistent with the assumption. The power allocation is P1/P = 0.5676, P2/P = 0.4324, giving

C = B[log2(1 + γ1 P1/P) + log2(1 + γ2 P2/P)] = 775.9 kbps.

(d) With equal-weight beamforming, the beamforming vector is c = (1/√3)[1 1 1]. The SNR is then

SNR = c^H H^H H c / (N0 B) = (0.78)(100) = 78

This gives a capacity of 630.35 kbps. The SNR achieved with beamforming is smaller than that of the best channel in part (c). If we had chosen c equal to the eigenvector corresponding to the largest eigenvalue, the beamforming SNR would equal the largest SNR in part (c). The beamforming SNR for the given c is greater than the two smallest eigenvalues in part (c) because the channel matrix has one large eigenvalue and two very small ones.
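The determinant identity of problem 1(d) is easy to sanity-check numerically. The following sketch (using NumPy and an arbitrary random complex matrix, not anything from the problem set) confirms det[I_M + AA^H] = det[I_N + A^H A]:

```python
import numpy as np

# Random complex M x N matrix A (an arbitrary example, not from the problems).
rng = np.random.default_rng(0)
M, N = 3, 5
A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))

# det(I_M + A A^H) vs. det(I_N + A^H A): equal because AA^H and A^H A
# share the same nonzero eigenvalues (problem 1(d)).
lhs = np.linalg.det(np.eye(M) + A @ A.conj().T)
rhs = np.linalg.det(np.eye(N) + A.conj().T @ A)
print(np.isclose(lhs, rhs))  # the two determinants agree
```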
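The filter construction in problem 6(b) can be checked the same way. This sketch applies the shaping filter U^H and the precoding filter built from the right singular vectors to an arbitrary example matrix (not the H of problem 6) and verifies that the effective channel becomes the diagonal singular-value matrix, i.e. Y = ΛX + U^H N:

```python
import numpy as np

# Arbitrary 3x3 example channel (not the matrix from the problem set).
rng = np.random.default_rng(1)
H = rng.standard_normal((3, 3))

# numpy returns H = U @ diag(s) @ Vh; the precoding filter is Vh^H.
U, s, Vh = np.linalg.svd(H)

# shaping * channel * precoding: U^H H (Vh^H) should be diag(s).
effective = U.conj().T @ H @ Vh.conj().T
print(np.allclose(effective, np.diag(s)))  # True: parallel scalar channels
```

Because U and the precoder are unitary, the noise U^H N keeps the same statistics, so the MIMO channel reduces to rank(H) independent scalar channels with gains s_i.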
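The water-filling numbers in problem 6(c) can be reproduced with a short script. The eigenmode SNRs are the ones given in the solution; the bandwidth B = 100 kHz is an assumption of this sketch, inferred from the stated 775.9 kbps capacity:

```python
import numpy as np

gammas = np.array([94.5, 6.86, 0.68])  # per-eigenmode SNRs from the solution
B = 100e3                              # Hz; assumed, implied by C = 775.9 kbps

# Standard water-filling iteration: try the k strongest modes and keep the
# largest k whose cutoff gamma0 lies below the weakest mode used.
for k in range(len(gammas), 0, -1):
    active = gammas[:k]
    # From sum_i (1/gamma0 - 1/gamma_i) = 1 over the k active modes:
    gamma0 = k / (1 + np.sum(1 / active))
    if gamma0 < active[-1]:
        break

p = 1 / gamma0 - 1 / active              # P_i / P on the active modes
C = B * np.sum(np.log2(1 + active * p))  # capacity in bps

# gamma0 ~ 1.73, P_i/P ~ [0.568, 0.432], C ~ 776 kbps (775.9 in the solution)
print(round(gamma0, 2), np.round(p, 4), round(C / 1e3, 1))
```

As in the solution, the third mode (γ3 = 0.68) falls below the cutoff and gets no power, so only two parallel channels carry data.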
Fall '08 · Goldsmith · Singular value decomposition
