Georgia Tech, ECE 4601
Excerpt: ... Spring 2009 EE 4601: Assignment 3. Date Assigned: February 3, 2009. Date Due: February 12, 2009.
1. Text problem 4.1
2. Text problem 4.2
3. Text problem 4.3
4. Assume that s(t) = (A/T) t for 0 ≤ t ≤ T, and 0 elsewhere, is a known signal in the presence of additive white Gaussian noise. a) Design a matched filter for s(t). Sketch the waveform at the output of the matched filter. b) Now assume that a correlation detector is used instead. Sketch the waveform at the output of the correlation detector.
5. Consider binary signaling on an additive white Gaussian noise channel. The conditional probability density functions of the matched filter or correlator outputs are
f_{y|1}(y|1) = (1/√(2π σ_w²)) exp{−(y − E)²/(2σ_w²)}, σ_w² = N0·E/2,
f_{y|0}(y|0) = (1/√(2π σ_w²)) exp{−(y + E)²/(2σ_w²)}, σ_w² = N0·E/2.
Decisions are made such that we choose "1" if y > τ and we choose "0" if y < τ. In Lecture 9, we have seen that the optimum decision threshold (which minimizes the bit error probability) is τ = 0 if P(1) = P(0) = 1/2. What is the optimum decision threshol ...
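The matched filter in problem 4 can be checked numerically. A minimal sketch, assuming A = T = 1 and a 1 kHz sampling grid (all hypothetical parameter choices): the matched filter for s(t) is h(t) = s(T − t), and when s(t) itself is applied at the input, the output peaks at t = T with value equal to the signal energy E = A²T/3.

```python
import numpy as np

# Hypothetical parameters: ramp signal s(t) = (A/T) t on [0, T].
A, T, fs = 1.0, 1.0, 1000          # amplitude, duration, sample rate (assumed)
dt = 1.0 / fs
t = np.arange(0, T, dt)
s = (A / T) * t                     # the known signal
h = s[::-1]                         # matched filter: h(t) = s(T - t)

y = np.convolve(s, h) * dt          # matched-filter output (noise-free)
peak = y.max()
E = A**2 * T / 3                    # analytical signal energy of the ramp
print(peak, E)                      # peak of the output approximates E
```

The correlation detector of part b) computes the same value at t = T (the running integral of s(τ)·s(τ)), but its output ramps up monotonically rather than forming the triangular-looking autocorrelation shape of the matched-filter output.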
University of Illinois, Urbana Champaign, EE 467
Excerpt: ... B sinc(2Bτ). Proof: As done in class, the proof involves showing that the process Y(t) = Nc(t) cos(ωc t) − Ns(t) sin(ωc t) is Gaussian, zero mean, and has the same ACF as N_BP(t). EFFECT OF NOISE ON ANALOG COMMUNICATION SYSTEMS. [Block diagram: AWGN channel — input → Hc(ω) → + N(t) → output.] The additive white Gaussian noise (AWGN) channel model accurately describes many point-to-point communication channels (e.g., the telephone line channel). The channel frequency response is represented by Hc(ω), and N(t) is WGN. (© V. Veeravalli, 1999.) For mobile communications channels, the channel filter is (randomly) time varying. For most cases of interest in this course, we assume that the filter is LTI and distortionless, i.e., Hc(ω) is roughly constant over the frequency range of the input. Signal-to-noise Ratio (SNR): A measure of fidelity of a communication system is the signal-to-noise ratio (SNR), defined by SNR = (useful signal power)/(noise power). We will use the symbol γ to denote SNR. SNR for Baseband Communic ...
University of Michigan, EECS 564
Excerpt: ... Bayes Factor vs GLRT (10 points). Compare the performance of the Bayes factor and the GLRT for testing the Geometric hypothesis vs. the Poisson hypothesis discussed in class. In particular, consider sample sizes of 10 and 50, and means of 0.25, 0.5, 1, and 2, and generate a 4-by-2 array of subplots. Each subplot should show the ROCs of the two tests overlaid. Is one test clearly better than the other? At all times assume that the two hypotheses have the same mean. NOTE: If X ~ Geometric(θ), 0 < θ < 1, then E[X] = (1 − θ)/θ. You may find the following Matlab commands helpful: geornd, poissrnd, hold on, subplot, logspace, gammaln, log, poisspdf, geopdf, sum. 4. Estimation via Detection (15 points). We have seen how estimation can be used to perform detection, for example in the GLRT or the MAP detector. But detection can also be used to perform estimation. Consider the problem of estimating a signal s observed in zero mean, additive white Gaussian noise with known variance. Suppose that s has a sparse repres ...
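The ROC comparison in this problem can be sketched in miniature by Monte Carlo: match the means (so θ = 1/(1 + m), since E[X] = (1 − θ)/θ = m), draw samples under each hypothesis, and compute the log-likelihood ratio. Everything below is a hypothetical sketch — parameters m, n, and the trial count are illustrative, and only a single ROC point is computed rather than the full 4-by-2 subplot array the assignment asks for.

```python
import random, math

random.seed(0)

def llr(sample, m):
    """Log-likelihood ratio log[P_geom(x)/P_pois(x)], summed over the sample;
    both distributions have mean m and support 0, 1, 2, ..."""
    theta = 1.0 / (1.0 + m)
    s = 0.0
    for x in sample:
        log_geom = x * math.log(1 - theta) + math.log(theta)
        log_pois = x * math.log(m) - m - math.lgamma(x + 1)
        s += log_geom - log_pois
    return s

def draw_geom(n, m):
    # inverse-CDF sampling of the number of failures before the first success
    theta = 1.0 / (1.0 + m)
    return [int(math.log(1.0 - random.random()) / math.log(1 - theta))
            for _ in range(n)]

def draw_pois(n, m):
    # Knuth's product-of-uniforms Poisson sampler
    out = []
    for _ in range(n):
        L, k, p = math.exp(-m), 0, 1.0
        while p > L:
            p *= random.random()
            k += 1
        out.append(k - 1)
    return out

m, n, trials = 1.0, 10, 2000
t_geom = sorted(llr(draw_geom(n, m), m) for _ in range(trials))
t_pois = sorted(llr(draw_pois(n, m), m) for _ in range(trials))

# one ROC point: threshold at the median of the H0 (Poisson) statistics
thr = t_pois[trials // 2]
pfa = sum(t > thr for t in t_pois) / trials
pd = sum(t > thr for t in t_geom) / trials
print(pfa, pd)   # the LRT should give pd noticeably above pfa
```

Sweeping `thr` over the pooled statistics traces out the full ROC; in Matlab the same computation maps onto the `geornd`/`poissrnd`/`gammaln` commands the note suggests.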
University of Toronto, ECE 1501
Excerpt: ... bility is p, and whose rate approaches C_BSC(p). This gives us a more refined goal for coding theory: rather than simply to protect data transmission systems against channel noise, as stated earlier, we would like to do so efficiently, i.e., to design data transmission systems with practical decoding algorithms that come close to achieving channel capacity. (Lecture Notes, Winter 2007, University of Toronto, ECE 1501: Error Control Codes.) [Figure 2: Channel capacity of the binary symmetric channel — C plotted against crossover probability p.] The BSC may be generalized to the Q-ary symmetric channel for any Q ≥ 2. The channel inputs and outputs are drawn from a finite alphabet of Q elements, for example {0, 1, ..., Q − 1}, and the channel's probability law is given by p(y|x) = 1 − p if y = x, and p/(Q − 1) if y ≠ x. Two other channels of interest in this course are the binary erasure channel and the additive white Gaussian noise channel. For reference ...
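The capacity curve in Figure 2 and its Q-ary generalization are easy to compute: C_BSC(p) = 1 − H2(p), and for the Q-ary symmetric channel with uniform inputs, C = log2 Q − H2(p) − p·log2(Q − 1). A minimal sketch:

```python
import math

def h2(p):
    """Binary entropy function H2(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    return 1.0 - h2(p)

def qsc_capacity(Q, p):
    """Capacity of the Q-ary symmetric channel with crossover probability p."""
    return math.log2(Q) - h2(p) - p * math.log2(Q - 1) if Q > 2 else bsc_capacity(p)

print(bsc_capacity(0.0))   # 1.0 (noiseless channel)
print(bsc_capacity(0.5))   # 0.0 (useless channel)
```

Note the symmetry C(p) = C(1 − p) visible in the figure: a channel that flips every bit is as good as a perfect one, since the flips can be undone.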
Stanford, EE 379
Excerpt: ... ITU Telecommunication Standardization Sector, Study Group 15, Temporary Document RN-072. Original: English. Red Bank, New Jersey, 21–25 May 2001. Question: 4/15. SOURCE: IBM, Globespan. TITLE: G.gen: G.dmt.bis: G.lite.bis: Further results on the performance of LDPC coded modulation for AWGN channels. ABSTRACT: We present further simulation results on the performance of LDPC coded modulation over an additive white Gaussian noise channel. Modulation types cover the range from 4-QAM up to 16384-QAM. The performance of LDPC codes of various lengths is illustrated in the spectral-efficiency versus power-efficiency plane. The results show that the proposed multilevel LDPC coding scheme exhibits uniform efficiency over all constellation sizes in terms of gap to capacity. Contacts: E. Eleftheriou ele@zurich.ibm.com, S. Ölçer oel@zurich.ibm.com, IBM Zurich Research Laboratory; M. Sorbara msorbara@globespan.net, M. Eyvazkhani meyvazkhani@globespan.net, Globespan. ...
University of Cambridge, IB Paper 8
Excerpt: ... Image Processing, IB Paper 8 Part A. Ognjen Arandjelovic, http://mi.eng.cam.ac.uk/~oa214/. Lecture Roadmap — Lecture 1: Face geometry; geometric image transformations. Lecture 2: Colour and brightness enhancement. Lecture 3: Denoising and image filtering. Image Denoising and Filtering. Image Noise Sources: image noise may be produced by several sources: quantization, photonic, thermal, electric. Denoising: to effectively perform denoising, we need to consider the following issues: the signal (uncorrupted image) model, typically piece-wise constant or linear, and the noise model (from the physics of image formation): additive or multiplicative, Gaussian, white, salt and pepper. Modelling Noise: most often noise is additive: observed pixel luminance = true luminance + noise process. Additive Gaussian Noise Example: a clear original image was corrupted by additive white Gaussian noise. ...
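The additive-noise model and the piece-wise-constant signal model from these slides can be illustrated on a toy 1-D "image": corrupt a step signal with white Gaussian noise and denoise with a simple moving average. This is a hypothetical sketch (the signal, noise level σ = 0.2, and 5-tap filter are all illustrative choices, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)

true = np.concatenate([np.zeros(50), np.ones(50)])   # piece-wise constant signal
noisy = true + rng.normal(0.0, 0.2, true.shape)      # observed = true + AWGN

kernel = np.ones(5) / 5                              # 5-tap averaging filter
denoised = np.convolve(noisy, kernel, mode="same")

mse_noisy = np.mean((noisy - true) ** 2)
mse_denoised = np.mean((denoised - true) ** 2)
print(mse_noisy, mse_denoised)                       # averaging reduces the MSE
```

Averaging suppresses white noise (variance drops roughly by the filter length) but blurs the step — the usual denoising trade-off between noise suppression and edge preservation. Salt-and-pepper noise, by contrast, is better handled by a median filter.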
University of Illinois, Urbana Champaign, ECE 361
Excerpt: ... course is to relate the statistical performance of the communication strategies to these two key resources. These three parts are discussed in the course in the context of three specific physical media. Additive white Gaussian noise channel: the received signal is the transmit signal plus a statistically independent noise signal. This is a basic model that underlies the more complicated wireline and wireless channels. Telephone channel: the received signal is the transmit signal passed through a time-invariant, causal filter, plus statistically independent noise. Voiceband V.90 modem and DSL technologies are used as examples. Wireless channel: the received signal is the transmit signal passed through a time-varying filter, plus statistically independent noise. The GSM and 1xEV-DO standards are used as examples. Requirements: weekly homework (25%), two midterms (20% each) and a final (35%). Text and Reference Material: there is no required textbook for this course. We will provide detailed lecture notes ...
University of Illinois, Urbana Champaign, ECE 359
Excerpt: ... ECE 359, Handout #14, Spring 2003. April 1, 2003. EFFECT OF NOISE ON ANALOG COMMUNICATION SYSTEMS. [Block diagram: AWGN channel — input → Hc(f) → + N(t) → output.] The additive white Gaussian noise (AWGN) channel model accurately describes many point-to-point communication channels (e.g., the telephone line channel). The channel frequency response is represented by Hc(f), and N(t) is WGN. For mobile communications channels, the channel filter is (randomly) time varying. For most cases of interest in this course, we assume that the filter is LTI and distortionless, i.e., Hc(f) is roughly constant over the frequency range of the input. Signal-to-noise Ratio (SNR): A measure of fidelity of a communication system is the signal-to-noise ratio (SNR), defined by SNR = (useful signal power)/(noise power). We will use the symbol γ to denote SNR. SNR for Baseband Communications: We begin by studying the SNR for baseband communication of an analog message signal m(t) with bandwidth W Hz on an AWGN channel. ...
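For the baseband case the handout introduces, the SNR definition reduces to a one-liner: with WGN of PSD N0/2 and an ideal lowpass receive filter of bandwidth W Hz, the noise power is (N0/2)·2W = N0·W, so SNR = P/(N0·W). A minimal sketch with entirely hypothetical numbers:

```python
import math

def baseband_snr(P, N0, W):
    """SNR = received power / in-band noise power = P / (N0 * W)."""
    return P / (N0 * W)

def to_db(x):
    return 10 * math.log10(x)

# Hypothetical numbers: 1 mW received power, N0 = 1e-9 W/Hz, W = 10 kHz.
snr = baseband_snr(1e-3, 1e-9, 1e4)
print(snr, to_db(snr))   # ~100, i.e. about 20 dB
```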
University of Illinois, Urbana Champaign, ECE 559
Excerpt: ... PAM (pulse amplitude modulation), while orthogonal signaling represents PPM (pulse position modulation). What common modulation scheme does the signal set {φk(t)} in this question represent? (b) Consider a code-base ensemble in which each distinct waveform of the form s(t) is assigned equal probability. Assuming that there is additive white Gaussian noise and that the signals {si(t)} are chosen independently at random, show that the value of R0 in the bound E[P[E]] < 2^{−N(R0 − R_N)} is R0 = −log2[(1/A)(1 + Σ_{k=1}^{A−1} exp(−(Ps/N0) sin²(πk/A)))], for A ≥ 3. What is the value of R0 when A = 2? Note that the dimensionality of the code base, N, is different for A = 2 and A ≥ 3. (c) Discuss the relation between the values of R0 obtained in part (a) for A = 2 and A = 4 and the value of R0 obtained with binary antipodal signaling (derived in lecture and also documented in Section 5.4 of the book by Wozencraft and Jacobs). (d) Show that in the limit as E_N/N0 → 0, R0/(E_N/N0) → 1/(2 ln 2) for every ...
University of Illinois, Chicago, EECS 422
Excerpt: ... u(t): normalized current or voltage. The problem of fast fading arises when the actual signal level at the receiver is comparable to the noise level and the SNR (signal-to-noise ratio) is below the maximum tolerable level. In order to understand the SNR issue, we assume that our radio channel is an additive white Gaussian noise (AWGN) channel. If the noise signal is n(t), then the actual received signal at time t is y(t) = A·u(t) + n(t), where A is the overall path loss and is assumed invariant with time, which is particularly true when the mobile moves within a small distance. n(t) is a complex phasor and is written as n(t) = x_n(t) + j·y_n(t). Then the mean noise power is P_n = (1/2) E[n(t) n*(t)] = (1/2)(E[x_n²(t)] + E[y_n²(t)]). Note that if x_n(t) and y_n(t) both have a standard deviation σ_n (usually the case), P_n = σ_n². The SNR then is SNR = (signal power)/(noise power) = E[|A·u(t)|²]/(2 P_n) = (A²/(2 P_n)) E[|u(t)|²] = A²/(2 P_n), with the signal power out of the modulator normalized to 1. We can now in ...
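The noise-power bookkeeping in this excerpt — P_n = (1/2)E[n n*] = σ_n² for a complex phasor with per-component standard deviation σ_n, and SNR = A²/(2P_n) — can be verified by simulation. A minimal sketch with assumed values σ_n = 0.5 and A = 2:

```python
import numpy as np

rng = np.random.default_rng(2)

sigma_n, A, N = 0.5, 2.0, 200000
# complex noise phasor n = x_n + j*y_n, each component ~ N(0, sigma_n^2)
n = rng.normal(0, sigma_n, N) + 1j * rng.normal(0, sigma_n, N)

Pn = 0.5 * np.mean(n * np.conj(n)).real      # (1/2) E[n n*]
snr = A**2 / (2 * Pn)                        # with E[|u(t)|^2] = 1
print(Pn, snr)    # Pn close to sigma_n^2 = 0.25, SNR close to A^2/(2 sigma_n^2) = 8
```

The factor of 1/2 reflects the phasor (complex-envelope) convention: the real passband power is half the squared envelope magnitude.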
Texas A&M, EE 455
Excerpt: ... EE 455, Test #2, Fall 2003. Given November 20, 2003. Closed-book, closed notes, one formula sheet allowed. Problems carry equal weight. 1. Consider binary signaling in an additive, white Gaussian noise channel using the two signals shown below. Design the optimum receiver (simplified as much as possible) and write an expression for the probability of error for this receiver. [Handwritten solution illegible in this scan.] 2. You are to design a two-dimensional signal constellation having 5 signals to be used in an additive white Gaussian noise channel. Each signal's ...
University of Illinois, Urbana Champaign, ECE 559
Excerpt: ... This signal set is called the simplex signal set. Observe that the simplex signal set has the same error probability as the orthogonal signal set {s_i} over the additive white Gaussian noise channel. (b) What is the dimension of the space spanned by the simplex signal set? Sketch the simplex signal set for M = 2 and M = 3 and get a feel for the simplex signal set. (c) What is the energy of each signal in the simplex signal set? Further, what is the correlation between any two signals in this set? In other words, calculate <s_i, s_j> for all i, j = 0 ... M − 1. (d) Assume that a set {x_i} of M vectors satisfies the equations: x_i · x_j = 1 if i = j, and ρ if i ≠ j. Prove that we must have ρ ≥ −1/(M − 1). Can you see an optimality property satisfied by the simplex signal set based on this result and the calculation in part (c)? (e) Prove, for any allowable ρ, that the signal set {s_i}, with s_i = √Es · x_i for all i, has the same error probability as the simplex signal set with energy 1 Es ...
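The energy and correlation asked for in part (c) can be checked numerically: building the simplex set by subtracting the centroid from M orthonormal signals gives per-signal energy (M − 1)/M and normalized correlation −1/(M − 1) between any two distinct signals. A minimal sketch (M = 4 is an arbitrary choice):

```python
import numpy as np

M = 4
orth = np.eye(M)                       # M orthonormal signal vectors
simplex = orth - orth.mean(axis=0)     # subtract the centroid of the set

energy = np.sum(simplex[0] ** 2)                 # = (M - 1) / M
corr = simplex[0] @ simplex[1] / energy          # = -1 / (M - 1)
print(energy, corr)                              # 0.75, -0.3333...
```

The value −1/(M − 1) is exactly the lower bound of part (d), which is the sense in which the simplex set is optimal: it achieves the most negative pairwise correlation possible for M unit-energy signals.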
Case Western, EECS 398
Excerpt: ... rbo-code Encoder: message bits → systematic symbols x; interleaver → parity symbols z. Turbo-code Decoder: deinterleaver → systematic symbols → MAP decoder DEC 1 → interleaver → MAP decoder DEC 2 → deinterleaver → parity symbols → decision → output. Symbol-by-Symbol Maximum A Posteriori (MAP) Decoding: Calculate γ for each transition in the trellis (proportional to a probability). Calculate α for each node in the trellis, left to right (a conditional probability). Calculate β for each node in the trellis, right to left (proportional to a conditional probability). Calculate the extrinsic likelihood from β, α, γ. Channel Model: zero-mean additive white Gaussian noise (AWGN) with given variance σ² = (2 Es/N0)^{−1}, Es = Eb. Theoretical performance in terms of bit error rate (BER) for antipodal signaling is BER = (1/2) erfc(√(Eb/N0)). [Slides: channel model performance; performance of RSC codes, 361-bit blocks; performance of turbo code; turbo code vs. RSC.] Questions? ...
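The uncoded reference curve quoted in these slides, BER = (1/2) erfc(√(Eb/N0)) for antipodal signaling, is the baseline against which the RSC and turbo-code curves are plotted. A minimal sketch of evaluating it over Eb/N0 in dB:

```python
import math

def ber_antipodal(ebno_db):
    """Uncoded antipodal (BPSK) bit error rate: 0.5 * erfc(sqrt(Eb/N0))."""
    ebno = 10 ** (ebno_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebno))

for db in (0, 4, 8):
    print(db, ber_antipodal(db))
```

At 0 dB this gives about 7.9e-2; coding gain is read off as the horizontal gap between this curve and the coded curves at a target BER.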
Brown University, AM 282
Excerpt: ... AM282, Section 3: Advanced Topics in Information Theory. Brown University, Spring 2004. April 7, 2004, Handout #8: Midterm. You have 3 hours to complete this midterm. You are allowed to use three pages (back and front) of notes. Please clearly indicate your reasoning in the space provided. Notice that the point allocation is not uniform. [Point table: problem 1, 25 points; problem 2, 15; problem 3, 15; problem 4, 20; problem 5, 25; total 100.] (1) Multiple Access Channel (25 points). (i) Consider the standard additive white Gaussian noise channel: Yi = Xi + Zi, (1) where {Zi} is IID N(0, N) and the input must satisfy (1/n) Σ_{i=1}^n x_i² ≤ P. What is the capacity? (ii) Now suppose that (1) is a multiple access channel, where Xi is the ith input from user 1 and Zi is the ith input from user 2. (Note: there is no noise.) User 1 has the same constraint as before ((1/n) Σ_{i=1}^n x_i² ≤ P) and user 2 also has a power constraint, (1/n) Σ_{i=1}^n z_i² ≤ N. What rates (R1, R2) are achievable? (iii) Now consider the mu ...
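For part (i), the answer is the standard AWGN capacity formula, C = (1/2) log2(1 + P/N) bits per channel use. A minimal sketch of evaluating it (the sample values are illustrative, not from the exam):

```python
import math

def awgn_capacity(P, N):
    """Capacity of Y = X + Z, Z ~ N(0, N), average power constraint P."""
    return 0.5 * math.log2(1 + P / N)

print(awgn_capacity(1.0, 1.0))   # 0.5 bits/use at P = N
print(awgn_capacity(3.0, 1.0))   # 1.0 bits/use at P = 3N
```

Part (ii) reuses the same quantity in a different role: with user 2's signal playing the part of the "noise", the formula reappears as a corner point of the multiple-access rate region.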
Caltech, EE 32
Excerpt: ... EE32b Signals, Systems, and Transforms. March 8, 2002. Homework Assignment 8, due (in class) 9am Monday March 11, 2002. Reading: OW2, Chapter 11, Sections 11.0, 11.1, 11.2.3, 11.4. R. J. McEliece, 162 Moore. Problems to Hand In: Problem 1. In the "BPSK" lecture of March 4, I discussed a BPSK modulated signal of the form y(t) = (−1)^{x[n]} √(2Eb/Ts) cos(ωc t), for nTs ≤ t < (n+1)Ts, where x[n] is a binary (i.e., x[n] = 0 or 1) discrete-time signal, Eb is the energy per bit (in joules), and Ts is the time needed to transmit one bit (in seconds). Then I suggested using a "matched filter" demodulator, which computes (1) Qn = ∫_{nTs}^{(n+1)Ts} y(t) s(t) dt, where (2) s(t) = √(2/Ts) cos(ωc t). I then showed that Qn = (−1)^{x[n]} √Eb, so that a good decision rule is (3) x̂[n] = 0 if Qn ≥ 0, 1 if Qn < 0. If now the received signal is r(t) = y(t) + n(t), where n(t) is an additive white Gaussian noise process with noise spectral density N0/2 watts/Hz, and Qn is defined by (4) Qn = ∫_{nTs}^{(n+1)Ts} r(t) s(t) dt, where s(t) is defined in (2), ...
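The demodulator described here can be simulated in discrete time. A hypothetical sketch (the sampling rate, carrier frequency, bit count, and noise level are all assumed, and WGN of PSD N0/2 is approximated by samples of variance (N0/2)·fs): the decision statistic has mean ±√Eb and noise variance N0/2, so the bit error rate should approach Q(√(2Eb/N0)).

```python
import numpy as np

rng = np.random.default_rng(3)

Eb, Ts, fs, fc = 1.0, 1.0, 1000, 10.0   # assumed parameters
N0 = 0.5                                 # noise PSD is N0/2 W/Hz
dt = 1.0 / fs
t = np.arange(0, Ts, dt)
s = np.sqrt(2 / Ts) * np.cos(2 * np.pi * fc * t)   # correlator template (2)

bits = rng.integers(0, 2, 200)
errors = 0
for b in bits:
    y = (-1) ** b * np.sqrt(2 * Eb / Ts) * np.cos(2 * np.pi * fc * t)
    r = y + rng.normal(0, np.sqrt(N0 / 2 * fs), t.size)   # sampled WGN
    Qn = np.sum(r * s) * dt                               # integral (4)
    b_hat = 0 if Qn >= 0 else 1                           # decision rule (3)
    errors += int(b_hat != b)

rate = errors / bits.size
print(rate)   # empirical BER; theory predicts Q(sqrt(2*Eb/N0)) ~ 0.023 here
```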
University of Illinois, Urbana Champaign, ECE 459
Excerpt: ... as Y = AX + b. Find A and b such that Y is zero mean with covariance matrix I. (Hint: diagonalize Σ.) 2. (15 pts total) Bounds on the Q function: Q(x) = ∫_x^∞ (1/√(2π)) e^{−t²/2} dt. (a) (8 pts) For x > 0, show that the following upper and lower bounds hold for the Q function: (1 − 1/x²) e^{−x²/2}/(x√(2π)) ≤ Q(x) ≤ e^{−x²/2}/(x√(2π)). Hint: for the upper bound, write the integrand as the product of 1/t and t·e^{−t²/2}, use integration by parts, and bound. For the lower bound, integrate by parts once more and bound. (© V. V. Veeravalli, 2000.) (b) (7 pts) As you know from your undergraduate communications course, the bit error probability for BPSK signaling in additive white Gaussian noise (AWGN) with PSD N0/2 is given by Pe = Q(√(2Eb/N0)), where Eb is the bit energy. Plot the error probability Pe (on a log scale) versus signal-to-noise ratio Eb/N0 (in dB) using Matlab or Mathematica. (You may need to use an appropriately modified version of the error function in these packages.) Consider Eb/N0 ranging from −5 dB to 15 dB. Also plot th ...
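The bounds in part (a) can be sanity-checked numerically, using Q(x) = (1/2) erfc(x/√2) — the "appropriately modified version of the error function" the problem alludes to. A minimal sketch (the test points are arbitrary; the lower bound is only informative for x > 1):

```python
import math

def Q(x):
    """Gaussian tail probability via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def upper(x):
    return math.exp(-x * x / 2) / (x * math.sqrt(2 * math.pi))

def lower(x):
    return (1 - 1 / (x * x)) * upper(x)

for x in (1.5, 3.0, 5.0):
    print(x, lower(x) <= Q(x) <= upper(x))   # True at each point
```

Both bounds share the factor e^{−x²/2}/(x√(2π)) and pinch together as x grows, which is why Q(x) ≈ e^{−x²/2}/(x√(2π)) is a good large-x approximation for BER calculations like part (b).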
University of Southampton, ELEC 3028
Excerpt: ... bols is additionally impeded by additive white Gaussian noise (AWGN); its severity is described by the signal-to-noise ratio (SNR). Therefore, dependent on the above parameters, we are interested in determining the maximum possible error-free information transmission rate (the channel capacity C). We will see that C depends on B and SNR. (ELEC3028 Digital Transmission — Overview & Information Theory, S. Chen.) Characteristics of the Channel: the channel can be described by its impulse response h(t) or equivalently its frequency response H(jω) = A(ω) e^{jφ(ω)}, with amplitude response A(ω) and phase response φ(ω); h(t) and H(jω) are a Fourier pair. Ideal channel (pure delay): h(t) = δ(t − T), with A(ω) = 1 and φ(ω) = −ωT. Flat magnitude and linear phase (constant group delay τ_G(ω) = −dφ(ω)/dω). The only impairment caused by an ideal channel is AWGN. Non-ideal channel: the channel is dispersive, causing intersymbol interference. ...
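The capacity C that the excerpt says depends on B and SNR is the Shannon capacity of a band-limited AWGN channel, C = B·log2(1 + SNR) bits/s. A minimal sketch (the 3 kHz / 30 dB example is a hypothetical telephone-type channel, not a figure from the notes):

```python
import math

def capacity(B, snr_db):
    """Shannon capacity C = B * log2(1 + SNR) in bits/s, SNR given in dB."""
    snr = 10 ** (snr_db / 10)
    return B * math.log2(1 + snr)

print(capacity(3000, 30))   # roughly 29.9 kbit/s
```

The formula makes the bandwidth/SNR trade explicit: capacity grows linearly in B but only logarithmically in SNR, so doubling the bandwidth helps far more than doubling the power once the SNR is already high.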
University of Illinois, Urbana Champaign, ECE 559
Excerpt: ... answered twice (both in class and at home) then the minimum of the two scores will be chosen. 6. The maximum score for any question is somewhat indicative of the fraction of the total time you might want to spend on that question, but do not be misled by that. The questions are posed in a way that they can be answered easily if seen in the correct light. 1. (7 points) It is known that the ML error probability is q when the two signal vectors s0 and s1 shown in Fig. (a) below are transmitted over a channel disturbed by additive white Gaussian noise. Compute the ML error probability in terms of q, , l when the nine vectors indicated in Fig. (b) are used as signals over the same channel. 2. (16 points total) A transmitted symbol X, equally likely to be ±1, is received at n antennas: y_i = x + z_i, i = 1 ... n. Here the noise (z1, ..., zn) is independent of the transmitted symbol and is zero mean, jointly Gaussian. (a) (3 points) Suppose the covariance matrix of the jointly Gaussian ran ...