Chapter 10

1. (a) (AA^H)^H = (A^H)^H A^H = AA^H, so AA^H is Hermitian. For a Hermitian matrix the eigenvalues satisfy λ = λ*, i.e. the eigenvalues are real, and AA^H admits the eigendecomposition AA^H = QΛQ^H.

(b) For any vector x, x^H AA^H x = (x^H A)(x^H A)^H = ‖x^H A‖² ≥ 0, so AA^H is positive semidefinite.

(c) I_M + AA^H = I_M + QΛQ^H = Q(I + Λ)Q^H. Since AA^H is positive semidefinite, λ_i ≥ 0, hence 1 + λ_i > 0 for every i, and I_M + AA^H is positive definite.

(d) det[I_M + AA^H] = det[I_M + QΛQ^H] = det[Q(I_M + Λ)Q^H] = det[I_M + Λ] = ∏_{i=1}^{Rank(A)} (1 + λ_i)

det[I_N + A^H A] = det[I_N + Q̃Λ̃Q̃^H] = det[Q̃(I_N + Λ̃)Q̃^H] = det[I_N + Λ̃] = ∏_{i=1}^{Rank(A)} (1 + λ_i)

AA^H and A^H A have the same nonzero eigenvalues, so det[I_M + AA^H] = det[I_N + A^H A].

2. H = UΣV^T with

U = [ -0.4793   0.8685  -0.1298
      -0.5896  -0.4272  -0.6855
      -0.6508  -0.2513   0.7164 ]

Σ = [ 1.7034   0        0
      0        0.7152   0
      0        0        0.1302 ]

V = [ -0.3458   0.6849   0.4263
      -0.5708   0.2191   0.0708
      -0.7116  -0.6109   0.0145
      -0.2198   0.3311  -0.9017 ]

3. H = UΣV^T. Let

U = [ 1  0        V = [ 1  0        Σ = [ σ₁  0
      0  1              0  1              0   σ₂ ]
      0  0 ]            0  0 ]

Then

H = [ σ₁  0   0
      0   σ₂  0
      0   0   0 ]

4. Check the rank of each matrix:

rank(H₁) = 3 ⇒ multiplexing gain = 3
rank(H₂) = 4 ⇒ multiplexing gain = 4

5. C = Σ_{i=1}^{R_H} log₂(1 + λ_i ρ/M_t), subject to the constraint Σ_i λ_i = constant. Setting ∂C/∂λ_i = ∂C/∂λ_j:

(1/(M_t ln 2)) · 1/(1 + λ_i ρ/M_t) − (1/(M_t ln 2)) · 1/(1 + λ_j ρ/M_t) = 0  ⇒  λ_i = λ_j

So this capacity is maximized when all R_H singular values are equal.

6. (a) Any method to show H ≈ UΣV^T is acceptable. For example, the percentage error matrix

D = [ 0.13  0.08  0.11
      0.05  0.09  0.14
      0.23  0.13  0.10 ]    where d_ij = |(H_ij − Ĥ_ij)/H_ij| × 100

(b) Precoding filter M = V^{-1}, shaping filter F = U^{-1}:

F = [ -0.5195  -0.3460  -0.7813
      -0.0251  -0.9078   0.4188
      -0.8540   0.2373   0.4629 ]

M = [ -0.2407  -0.8894   0.3887
      -0.4727  -0.2423  -0.8473
      -0.8478   0.3876   0.3622 ]

Thus Y = F(HMX + N) = U^{-1}UΣVV^{-1}X + U^{-1}N = ΣX + U^{-1}N.

(c) Water-filling: P_i/P = 1/γ₀ − 1/γ_i for γ_i > γ₀, and 0 otherwise, with γ_i = σ_i² P/(N₀B):

γ₁ = 94.5, γ₂ = 6.86, γ₃ = 0.68

Assume γ₀ > γ₃, since γ₃ = 0.68 is clearly too small for data transmission. Then

(1/γ₀ − 1/γ₁) + (1/γ₀ − 1/γ₂) = 1  ⇒  γ₀ = 1.73
P₁/P = 0.5676,  P₂/P = 0.4324
C = B [log₂(1 + γ₁P₁/P) + log₂(1 + γ₂P₂/P)] = 775.9 kbps

(d) With equal-weight beamforming, the beamforming vector is c = (1/√3)[1 1 1]^T. The SNR is then

SNR = c^H H^H H c · P/(N₀B) = (0.78)(100) = 78

This gives a capacity of 630.35 kbps. The SNR achieved with beamforming is smaller than the best channel SNR in part (c). If we had chosen c to equal the eigenvector corresponding to the largest eigenvalue, then the SNR with beamforming would equal the largest SNR in part (c). The beamforming SNR for the given c is greater than the two smallest SNRs in part (c) because the channel matrix has one large eigenvalue and two very small eigenvalues.
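The SVD in Problem 2 can be sanity-checked numerically. The channel matrix H is not listed in the solution, so the sketch below (assuming NumPy) reconstructs it from the rounded factors above; the recovered singular values therefore only match Σ to roughly the printed four-decimal precision.

```python
import numpy as np

# Rounded SVD factors from Problem 2 (4 decimal places)
U = np.array([[-0.4793,  0.8685, -0.1298],
              [-0.5896, -0.4272, -0.6855],
              [-0.6508, -0.2513,  0.7164]])
S = np.diag([1.7034, 0.7152, 0.1302])
V = np.array([[-0.3458,  0.6849,  0.4263],
              [-0.5708,  0.2191,  0.0708],
              [-0.7116, -0.6109,  0.0145],
              [-0.2198,  0.3311, -0.9017]])

H = U @ S @ V.T                               # reconstruct the 3x4 channel matrix
sv = np.linalg.svd(H, compute_uv=False)       # singular values, descending
print(sv)                                     # close to [1.7034, 0.7152, 0.1302]
```

Because U and V as printed are orthonormal only to rounding error, the comparison tolerance is loose (1e-2) rather than exact.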
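The water-filling allocation in 6(c) can be reproduced programmatically. A minimal sketch, taking the γ_i values from the solution and assuming B = 100 kHz (not stated in this excerpt, but consistent with the 775.9 kbps figure):

```python
import numpy as np

gammas = np.array([94.5, 6.86, 0.68])   # channel SNRs gamma_i from part (c), descending
B = 100e3                               # bandwidth in Hz (assumed: 100 kHz)

# Water-filling: drop the weakest channel until every remaining gamma_i
# exceeds the cutoff gamma_0, where gamma_0 solves
#   sum_i (1/gamma_0 - 1/gamma_i) = 1   over the active set.
active = gammas.copy()
while True:
    gamma0 = len(active) / (1 + np.sum(1 / active))
    if active[-1] > gamma0:
        break
    active = active[:-1]                # weakest channel gets zero power

p = 1 / gamma0 - 1 / active            # power fractions P_i / P
C = B * np.sum(np.log2(1 + active * p))
print(gamma0, p, C)                    # ~1.73, ~[0.5676, 0.4324], ~775.9e3 bps
```

The loop correctly discards γ₃ = 0.68 on the first pass, matching the "too small for data transmission" argument in the solution.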
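The equal-singular-value result of Problem 5 can also be spot-checked numerically: for a fixed eigenvalue sum, any unequal split gives lower capacity than the uniform one, since log₂ is concave. A small sketch (the values of ρ, M_t, and the two eigenvalue vectors are arbitrary illustrative choices, not from the problem):

```python
import numpy as np

def cap(lams, rho=10.0, Mt=3):
    # C = sum_i log2(1 + lambda_i * rho / Mt)
    return float(np.sum(np.log2(1 + np.array(lams) * rho / Mt)))

equal  = cap([1.0, 1.0, 1.0])    # uniform eigenvalues, sum = 3
skewed = cap([2.5, 0.25, 0.25])  # same sum, unequal spread
print(equal, skewed)             # uniform allocation gives the larger capacity
```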
This note was uploaded on 01/11/2012 for the course EE 359 taught by Professor Goldsmith during the Fall '08 term at Stanford.