# ECE 5670: Digital Communications (Spring 2011), Homework 4

ECE 5670: Digital Communications (Spring 2011)
Homework 4, due February 24 in class
Instructor: Salman Avestimehr, Office 325 Rhodes Hall

1. Recall that in class it was shown that for a binary-PAM constellation, reliable communication is possible at all rates less than

$$R^* = 1 - \log_2\left(1 + e^{-E/(2\sigma^2)}\right).$$

In this problem we generalize this result to an M-PAM constellation. Consider the channel

$$\vec{y} = \vec{x} + \vec{w},$$

where $\vec{w}$ is a $T \times 1$ noise vector whose entries are i.i.d. Gaussian random variables with zero mean and variance $\sigma^2$. The vector $\vec{x}$ is a $T \times 1$ vector representing the transmitted codeword. There are $2^{RT}$ possible codewords, which we label $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_{2^{RT}}$. These codewords are generated as follows. Let $C$ be a random generator matrix with $T$ rows and $RT/\log_2 M$ columns. It has $RT^2/\log_2 M$ entries, and these are i.i.d. random variables taking values in the set $\{1, 2, \ldots, M\}$ with equal probability $1/M$. The data vector is denoted by $B$; it has dimension $RT/\log_2 M \times 1$. There are $M^{RT/\log_2 M} = 2^{RT}$ possible data vectors $B_1, B_2, \ldots, B_{2^{RT}}$, representing all possible combinations of elements from the set $\{0, 1, \ldots, M-1\}$. Thus $R \le \log_2 M$. For example, the first data vector is $B_1 = [0, 0, \ldots, 0]^t$ and the last one is $B_{2^{RT}} = [M-1, M-1, \ldots, M-1]^t$. The codeword $\vec{v}_i$ is given by

$$\vec{v}_i = \sqrt{E}\left(\frac{2}{M-1}\,(C B_i \bmod M) - \mathbf{1}\right),$$

where $\mathbf{1}$ denotes the all-ones vector. In other words, we map the $M$-ary valued entries of the data vector into an M-PAM constellation with maximum amplitude $\sqrt{E}$. For example, if $M = 4$ and $C B_i = [0, 1, 2, 3]^t$, then we transmit

$$\vec{v}_i = \left[-\sqrt{E},\; -\sqrt{E}/3,\; +\sqrt{E}/3,\; +\sqrt{E}\right]^t.$$

(a) Using the union bound, show that the probability that the ML decoder makes an error satisfies

$$\Pr(\mathcal{E}) \le \sum_{i=1}^{2^{RT}} \sum_{\substack{j=1 \\ j \ne i}}^{2^{RT}} \Pr(\vec{v}_i \to \vec{v}_j \mid \vec{x} = \vec{v}_i)\,\Pr(\vec{x} = \vec{v}_i),$$

where $\Pr(\vec{v}_i \to \vec{v}_j \mid \vec{x} = \vec{v}_i)$ denotes the probability that the received vector lies closer to $\vec{v}_j$ than to $\vec{v}_i$.
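The codeword construction above can be sketched numerically. This is a minimal sketch, not part of the assignment: the parameter values for `R`, `T`, and `E` below are illustrative assumptions, chosen only so the M = 4 mapping example can be checked.

```python
import numpy as np

# Illustrative parameters (assumptions, not given in the problem).
M = 4        # M-PAM alphabet size
R = 1.0      # rate; must satisfy R <= log2(M)
T = 6        # block length
E = 9.0      # peak energy, so the maximum amplitude is sqrt(E) = 3

k = int(R * T / np.log2(M))   # data-vector length: RT / log2(M)

rng = np.random.default_rng(0)
# Generator matrix C: T x k, i.i.d. uniform entries in {1, ..., M}
# (as stated in the problem).
C = rng.integers(1, M + 1, size=(T, k))

def pam_map(a, M, E):
    """Map M-ary symbols a in {0, ..., M-1} onto M-PAM with peak sqrt(E)."""
    return np.sqrt(E) * (2.0 * a / (M - 1) - 1.0)

def codeword(C, B, M, E):
    """v_i = sqrt(E) * ( (2/(M-1)) * (C B_i mod M) - 1 )."""
    return pam_map((C @ B) % M, M, E)

# Check the problem's M = 4 example: symbols [0, 1, 2, 3] should map to
# [-sqrt(E), -sqrt(E)/3, +sqrt(E)/3, +sqrt(E)], i.e. [-3, -1, 1, 3] here.
print(pam_map(np.array([0, 1, 2, 3]), M, E))
```

The mapping reproduces the four amplitudes listed in the problem, and every codeword entry stays within the peak amplitude $\sqrt{E}$ by construction.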
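The binary-PAM rate expression $R^* = 1 - \log_2(1 + e^{-E/(2\sigma^2)})$ that the problem asks you to generalize is straightforward to evaluate; a minimal sketch (the sample values of $E$ and $\sigma^2$ are assumptions for illustration):

```python
import numpy as np

def binary_pam_rate(E, sigma2):
    """R* = 1 - log2(1 + exp(-E / (2 * sigma2))), the binary-PAM result."""
    return 1.0 - np.log2(1.0 + np.exp(-E / (2.0 * sigma2)))

# At E = 0 the bound is exactly 0 (since log2(2) = 1); as E / sigma^2
# grows, R* approaches 1 bit per symbol, and R* increases monotonically
# with the SNR E / sigma^2.
print(binary_pam_rate(0.0, 1.0))    # 0.0
print(binary_pam_rate(10.0, 1.0))   # close to 1
```

This matches the intuition behind the generalization: with more constellation points (larger $M$) the same style of bound should allow rates up to $\log_2 M$ rather than 1.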
