

### EECS 555 and 452

Course: EECS 555, Spring 2012
School: Michigan

#### EECS 555: Digital Communications Theory, Winter 2009

Instructor: Prof. Wayne Stark
Course Time: Tuesday and Thursday, 10:40-12:00
Office Hours: Monday and Wednesday, 11:00-12:00, or by appointment
Office: 4242 EECS
Course Notes: Available online
E-mail: stark@eecs.umich.edu
Copyright © Wayne E. Stark, 2009

#### Grading

Grading will be based on homework, quizzes, a midterm exam, and a project.

| Component | Weight |
| --- | --- |
| Homework | 15% |
| Quizzes | 15% |
| In-class participation | 10% |
| Midterm exam | 30% |
| Project | 30% |
| Total | 100% |

#### Course Goals

- Obtain an understanding of the fundamental tradeoffs in the performance of a communication system.
- Be able to analyze the performance of a given communication system.
- Be able to determine the optimal receiver for a given communication system and channel characteristic.
- Be able to design a communication system with low complexity but near-optimal performance.
- For a few standardized communication systems, obtain an understanding of why the design choices were made.

#### Lecture 1: Wireless Communication Systems

There are a number of different wireless communication systems. These include the following:

- Analog cellular
- Analog cordless phones
- Paging
- Digital cordless phones
- Digital cellular
- Packet radio
- Wireless local area networks
- Low earth orbit satellites

#### Analog Cellular

The analog cellular systems are in widespread use. The frequency bands used in different countries are shown below (from [1]). All of these systems used FM (frequency modulation) with FDMA (frequency division multiple access).
#### Analog Cellular Systems (Speech)

| Standard | Frequencies Mobile/Base (MHz) | Channel Spacing | Number of Channels | Region |
| --- | --- | --- | --- | --- |
| AMPS | 824-849 / 869-894 | 30 kHz | 832 | US |
| TACS | 890-915 / 935-960 | 25 kHz | 1000 | Europe |
| ETACS | 872-905 / 917-950 | 25 kHz | 1240 | United Kingdom |
| NMT 450 | 453-457.5 / 463-467.5 | 25 kHz | 180 | Europe |
| NMT 900 | 890-915 / 935-960 | 12.5 kHz | 1999 | Europe |
| C-450 | 450-455.74 / 460-465.74 | 10 kHz | 573 | Germany, Portugal |
| RTMS | 450-455 / 460-465 | 25 kHz | 200 | Italy |
| Radiocom 2000 | 192.5-199.5 / 200.5-207.5; 215.5-233.5 / 207.5-215.5; 162.5-168.4 / 169.8-173; 414.8-418 / 424.8-428 | 12.5 kHz | 560; 640; 256; 256 | France |
| NTT | 925-940 / 870-885 | 25 kHz | 600 | Japan |
| JTACS/NTACS | 915-925 / 860-870 | 25 kHz | 400 | Japan |

#### Digital Cellular

| | IS-54 | IS-95 | GSM | JDC |
| --- | --- | --- | --- | --- |
| Downlink (MHz) | 869-894 | 869-894 | 935-960 | 810-826 |
| Uplink (MHz) | 824-849 | 824-849 | 890-915 | 940-956 |
| Country | U.S.A. | U.S.A. | Europe | Japan |
| Multiple access | TDMA/FDMA | CDMA/FDMA | TDMA/FDMA | TDMA/FDMA |
| Data rate (kbps) | 8 | 1.2-9.6 | 13 | 8 |
| RF channel spacing | 30 kHz | 1.25 MHz | 200 kHz | 25 kHz |
| Modulation | π/4 DQPSK | BPSK | GMSK | π/4 DQPSK |
| Coding | Convolutional, CRC | Convolutional, orthogonal | Convolutional, CRC | Convolutional, CRC |
| Channel rate | 48.6 kbps | 1.2288 Mcps | 270.833 kbps | 42 kbps |
| Frame duration | 40 ms | 20 ms | 4.615 ms | 29 ms |
| Power (max/avg) | 600 mW | 600 mW | 1 W / 125 mW | 200 mW |

From [2, 3].

#### Personal Communications Systems (PCS)

| Frequency Band | Block Designation | Auction Type | Bandwidth |
| --- | --- | --- | --- |
| 1850-1865 MHz | A | MTA | 15 MHz |
| 1865-1870 MHz | D | BTA | 5 MHz |
| 1870-1885 MHz | B | MTA | 15 MHz |
| 1885-1890 MHz | E | BTA | 5 MHz |
| 1890-1895 MHz | F | BTA | 5 MHz |
| 1895-1910 MHz | C | MTA | 15 MHz |
| 1910-1920 MHz | Unlicensed data | | 10 MHz |
| 1920-1930 MHz | Unlicensed voice | | 10 MHz |
| 1930-1945 MHz | A | MTA | 15 MHz |
| 1945-1950 MHz | D | BTA | 5 MHz |
| 1950-1965 MHz | B | MTA | 15 MHz |
| 1965-1970 MHz | E | BTA | 5 MHz |
| 1970-1975 MHz | F | BTA | 5 MHz |
| 1975-1990 MHz | C | MTA | 15 MHz |

Auction dates: 12/6/94-3/13/95 and 8/29/95. MTA: Major Trading Area (51). BTA: Basic Trading Area (493).

#### Auctions for Frequencies

The auction for the A and B bands generated $7,736,020,384.
WirelessCo, L.P., a partnership among Sprint, Tele-Communications, Inc., Cox Cable, and Comcast Telephony, placed high bids totaling $2,110,079,168 in 29 markets. AT&T Wireless PCS Inc. was the high bidder in 21 markets with $1,684,418,000 in bids. The FCC requires broadband PCS licensees to make their services available to one-third of the population in their service area within five years and to two-thirds within 10 years.

A new auction (Auction 73) for 700 MHz spectrum was scheduled to begin January 24, 2008.

#### Auctions for Frequencies (cont.)

Below is a sample of the information provided on the World Wide Web concerning the auction. For further information see the FCC home page (http://www.fcc.gov).

| Market Number | Frequency Block | Round | Bid Amount | Bidder Number | Date | Time |
| --- | --- | --- | --- | --- | --- | --- |
| B321 | C | 5 | $300,000,000 | 2224 | 1/5/96 | 12:58:51 |
| B184 | C | 5 | $6,461,552 | 2358 | 1/5/96 | 10:39:10 |
| B007 | C | 5 | $5,770,000 | 2326 | 1/5/96 | 10:08:51 |
| B318 | C | 5 | $5,492,000 | 2086 | 1/5/96 | 12:25:53 |
| B438 | C | 5 | $3,955,701 | 2010 | 1/5/96 | 10:06:42 |
| B010 | C | 5 | $2,550,011 | 2187 | 1/5/96 | 10:18:40 |
| B412 | C | 5 | $2,442,276 | 2146 | 1/5/96 | 10:30:27 |
| B361 | C | 5 | $1,413,361 | 2238 | 1/5/96 | 10:13:13 |
| B063 | C | 5 | $963,103 | 2290 | 1/5/96 | 10:37:42 |
| B319 | C | 5 | $1,292,000 | 2086 | 1/5/96 | 12:25:53 |

#### Personal Communications Systems (PCS) Standards

| | IS-136 | PACS | IS-95 derivative | W-CDMA | GSM derivative | DECT derivative | Omnipoint |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Multiple access | TDMA/FDMA | TDMA/FDMA | DS-CDMA | DS-CDMA | TDMA/FDMA | TDMA/FDMA | CDMA/TDMA/FDMA |
| Data rate | 8 kbps | 32 kbps | 8/13.3 kbps | 32 kbps | 13 kbps | 32 kbps | 8/32 kbps |
| Bandwidth | 30 kHz | 300 kHz | 1.25 MHz | 5 MHz | 200 kHz | 1.728 MHz | 5 MHz |
| Modulation | π/4 DQPSK | π/4 DQPSK | BPSK | QPSK | GMSK | GFSK | QCPM |
| Coding | FEC | Error det. | FEC | FEC | FEC | None | None |
| Avg. power | 200 mW | 25 mW | 200 mW | 200 mW | 125 mW | 20.8 mW | 10 mW |
| Peak power | 600 mW | 200 mW | 200 mW | 200 mW | 1 W | 250 mW | 1 W |
| Frame duration | 20 ms | 2.5 ms | 20 ms | - | 4.62 ms | 10 ms | 20 ms |
| Slot duration | 6.7 ms | 0.3125 ms | - | - | 0.58 ms | 0.416 ms | 0.625 ms |

#### Trends in Wireless Communications

(Figure: mobility/cell size versus data rate, placing paging, cellular, 3G/IMT-2000 W-CDMA, 4th-generation cellular, wireless LAN (802.11b/g), Bluetooth, and ultrawideband over data rates from about 10 kbps to 100 Mbps.)

#### Industrial, Scientific and Medical (ISM) Bands

Frequencies: 902-928 MHz, 2400-2483 MHz, 5725-5850 MHz.

There are no standards here, and many systems are currently available. The FCC requires the use of spread-spectrum communications so as to minimize the interference among users. Users are limited to 1 watt of transmitted power and must spread the bandwidth by a factor of 10 or more. The power radiated outside the band must be at least 20 dB below the maximum power density within the band.

There are many systems designed for the 902-928 MHz band, mostly using direct-sequence spreading. The systems for the 2.4 GHz band mostly use frequency-hopped spreading. The data rates vary from around 10 kbps to 1.5 Mbps.

#### Other Wireless Systems

- CDPD: An overlay of existing cellular systems that shares base stations with cellular. Modulation: GMSK. Coding: Reed-Solomon. Data rate: 19.2 kbps. Currently being deployed.
- ARDIS (IBM/Motorola, 1983): Frequency: 800 MHz. Data rate: 4.8-19.2 kbps. Modulation: GMSK. Power: 40 W base, 4 W mobile. Range: 10-15 mi.
- Bluetooth
- 802.11
- WiMax
- LMDS

#### Digital Modulation

- Narrowband (bandwidth on the order of the data rate).
- Wideband (spread spectrum; bandwidth much larger than the data rate).
#### Narrowband Modulation Goals

- Maximize the data rate given a limited portion of RF bandwidth
- Achieve low error probabilities
- Combat fading of the communication channel
- Deal with nonlinear amplifiers
- Design a multiple-access scheme

#### Maximizing Data Rate While Constraining Bandwidth

- Send multiple bits per symbol (MPSK, QAM)
- Filter the transmitted signal
- Adjust the shape of the basic transmitted pulse

#### Achieving Low Error Probabilities

- Increase transmitted power
- Choose a signal constellation with a large minimum distance
- Utilize channel error control coding techniques

#### Combating Fading

- Increase transmitted power
- Choose a signal constellation with a large minimum distance
- Utilize channel error control coding techniques
- Increase the bandwidth of the signal
- Utilize spatial diversity with antenna arrays

#### Dealing with Nonlinear Amplifiers

- Utilize constant-envelope signals
- Smooth the phase transitions of transmitted signals
- Minimize peak-to-average envelope variations after filtering

#### Designing a Multiple-Access Scheme

- Implement time-division multiple access (TDMA)
- Implement frequency-division multiple access (FDMA)
- Implement a random access scheme

(Figure 1: Multiple-Access Techniques. Figure 2: Time/Frequency Multiple-Access, FDMA/TDMA.)

#### Spread Spectrum

Spread spectrum is a form of modulation that uses considerably more bandwidth than is usually required to transmit at a given data rate over simple channels (e.g., the additive Gaussian noise channel). Spread spectrum was originally developed for communication in a hostile jamming environment, but in the last decade it has been considered for environments such as fading channels, multipath channels, and multiple-access channels.
Commercial applications:

- Satellite communications
- Indoor wireless communication (Bluetooth)
- Urban radio (cellular radio)
- Power line transmission
- Optical fiber

#### Spread Spectrum (cont.)

The basic idea of spread spectrum is that, since the available bandwidth is much larger than necessary to transmit the data, the signal can be hidden in the large bandwidth available. More insight is gained by viewing signals as points in a space of time-limited and bandwidth-limited signals. The number of dimensions in this space is proportional to the time-bandwidth product.

| | Time-Bandwidth Product |
| --- | --- |
| Unspread system | 1 |
| Spread system | 30-10000 |

If the spread signal can occupy any of many thousands of dimensions, an interferer does not know in which dimension to concentrate his noise. As such, his signal must be spread over all dimensions, reducing the power level in any one of them (in particular, the one where the signal is hidden).

There are several different techniques for spreading a signal:

- Frequency hopping (FH)
- Direct sequence (DS)
- Time hopping (TH)
- Chirp
- Hybrid combinations of the above

We will only discuss FH and DS techniques. Each of these techniques has advantages and disadvantages depending on the situation.

(Figures: block diagram of a DS-CDMA system, and a conceptual view of DS with jamming. The data is narrowband modulated, spread by the spreading code, and transmitted; despreading at the receiver collapses the desired signal back to narrowband while spreading any narrowband jamming.)

#### Spread Spectrum (cont.)

DS:

- Suffers from the near-far problem (with conventional receivers).
- Can usually be demodulated coherently.
- Works well with multipath fading.

FH:

- Difficult to demodulate coherently.
- Able to cope with the near-far problem.
- Less resistant to multipath fading.
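A direct-sequence spread/despread cycle is easy to sketch. The parameters below (a random ±1 chip sequence, spreading factor 31) are purely illustrative and do not correspond to any particular standard:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 31                                     # chips per data bit (illustrative)
chips = rng.choice([-1.0, 1.0], size=N)    # random +/-1 spreading code

def spread(bits):
    # Each +/-1 data bit is multiplied by the whole chip sequence,
    # expanding the bandwidth by a factor of N.
    return np.concatenate([b * chips for b in bits])

def despread(signal):
    # Correlate each N-chip block against the code and take the sign.
    blocks = signal.reshape(-1, N)
    return np.sign(blocks @ chips)

bits = np.array([1.0, -1.0, -1.0, 1.0])
tx = spread(bits)
rx = tx + 0.8 * rng.standard_normal(tx.size)   # wideband noise/interference
print(despread(rx))                            # recovered data bits
```

After despreading, the correlator output is N·b plus noise, so the effective SNR is boosted by the spreading factor N, which is exactly the "processing gain" that lets the signal survive wideband interference.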
Hybrid systems try to get the advantages of each of the component systems without the disadvantages. JTIDS (Joint Tactical Information Distribution System), flown on AWACS planes, uses a combination of FH, DS and TH (along with error correction coding).

#### Spread Spectrum (cont.)

There is a digital cellular telephone standard (IS-95) that is a form of DS with power control to eliminate the near-far problem. There is a digital cellular telephone standard (GSM) that is frequency hopped with coding to compensate for the fading problem.

Spread spectrum has been developed for indoor wireless data networks. Both DS and FH have been considered, with the majority of the systems being DS. In a packet radio network without base stations, power control is not possible, so FH is a better candidate. The SINCGARS radio is a packet radio network utilizing FH spread spectrum. The Bluetooth system is also a frequency-hopped radio.

#### Other Topics

- Ultrawideband radio
- Space-time processing

#### Important Parameters in a Wireless Communications System

Power or Energy: Clearly, the more power available, the more reliable the communication can be. However, the goal is to reduce the required transmission power so that talk time is maximized.

Data Rate: The goal is large data rates. However, for a fixed amount of power, as the data rate increases the energy transmitted per bit decreases because of the decreased transmission time for each bit. In addition, if the data rate increases then the amount of intersymbol interference increases. A wireless channel typically has an impulse response with some delay spread; that is, the received signal is delayed by different amounts on different paths. The signal corresponding to a particular bit received with the longest delay will interfere with the signal corresponding to a different bit received with the shortest delay. The larger the number of bits that are interfered with, the more difficult it is to correct for this interference.

#### Important Parameters (cont.)

Bandwidth: This is the amount of frequency spectrum available for use. Generally the FCC allocates spectrum and provides some type of mask within which the radio's emissions must fall. The larger the bandwidth, the more independent the fades across frequencies, and thus the better the averaging possible.

Error Probability: Data communication requires a smaller error probability than voice transmission. Usually we are interested in either bit error probability or packet error probability.

Delay Spread (Coherence Bandwidth): The delay spread of a channel measures the differential delay between the longest significant path and the shortest significant path in a channel. The delay spread is inversely related to the coherence bandwidth, which indicates the minimum frequency separation such that the responses at two different frequencies are independent.

Coherence Time (Doppler Spread): This is related to the vehicular speed. The coherence time measures how fast the channel is changing. If the channel changes quickly it is hard to estimate the channel response; however, a quickly changing channel also ensures that a deep fade does not last too long. The Doppler spread describes the frequency characteristics of the channel impulse response and is inversely related to the coherence time.

Delay Requirement: Larger allowed delay lets a larger number of fades be averaged out.

Complexity: More complexity usually implies better performance. The trick is to get the best for less.

#### Communication System "Coat of Arms"

There are many different functions in a digital communication system. These are represented in the block diagram shown below.
(Figure 3: Block diagram of a digital communication system: source, source encoder, encryption, channel encoder, modulator, channel, demodulator, channel decoder, decryption, source decoder. The modulator-channel-demodulator portion forms the "super channel.")

Source Encoder: Removes redundancy from the source data such that the output of the source encoder is a sequence of symbols from a finite alphabet. If the source produces symbols from an infinite alphabet, then some distortion must be incurred in representing the source with a finite alphabet. If the source encoder's output rate is below the entropy of the source, then distortion must be incurred.

Encryption Device: Transforms an input sequence {W_k} into an output sequence {Z_n} such that knowledge of {Z_n} alone (without a key) makes calculation of {W_k} extremely difficult (many years of CPU time on a fast computer).

Channel Encoder: Introduces redundancy into the data such that if some errors are made over the channel they can be corrected.

Note: The source encoder removes unstructured redundancy from the source data and may cause distortion or errors in a controlled fashion. The channel encoder adds redundancy in a structured fashion so that the channel decoder can correct some errors caused by the channel.

Modulator: Maps a finite number of messages into a set of distinguishable signals so that at the channel output it is possible to determine which signal in the set was transmitted.

Channel: The medium by which the signal propagates from transmitter to receiver.

Examples of communication channels:

- Noiseless channel (very good, but not interesting).
- Additive white Gaussian noise channel (classical; for example, the deep-space channel is essentially an AWGN channel).
- Intersymbol interference channel (e.g., the telephone channel).
- Fading channel (mobile communication systems when transmitters are behind buildings; satellite systems when there is rain on the earth).
- Multiple-access interference (several users accessing the same frequency at the same time).
- Hostile interference (jamming signals).
- Semiconductor memories (RAMs; errors due to alpha-particle decay in the packaging).
- Magnetic and optical disks (compact discs for audio and for read-only memories; errors due to scratches and dust).

Demodulator: Processes the channel output and produces an estimate of the message that caused the output.

Channel Decoder: Reverses the operation of the channel encoder in the absence of any channel noise. When the channel causes some errors to be made in the estimates of the transmitted messages, the decoder corrects these errors.

Decryption Device: With the aid of a secret key, reverses the operation of the encryption device. With private-key cryptography, the key determines the method of encryption, which is easily invertible to obtain the decryption. With public-key cryptography there is a key which is made public; this key allows anyone to encrypt a message. However, even knowing this key, it is not possible (at least not easily) to reverse this operation and recover the message from the encrypted message. There are special properties of the encryption algorithm, known only to the decryption device, which make this operation easy. This is known as a trap door. Since the encryption key need not be kept secret for the message to be kept secret, this is called public-key cryptography.

Source Decoder: Reverses the operation of the source encoder to determine the most probable sequence that could have caused the output.

Often the modulator-channel-demodulator is thought of as a "super channel" with a finite number of inputs and a finite or infinite number of outputs.
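The "super channel" abstraction can be illustrated by simulating a modulator-AWGN-demodulator chain and measuring its single crossover probability. BPSK is assumed here purely as an illustration (it is not specified at this point in the notes), and the closed form 0.5·erfc(√(Eb/N0)) is quoted as a standard result:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)

# Modulator + AWGN channel + demodulator viewed as one discrete
# "super channel": binary input, binary output, one crossover probability.
ebn0_db = 4.0
ebn0 = 10 ** (ebn0_db / 10)

n = 200_000
bits = rng.integers(0, 2, n)
s = 2.0 * bits - 1.0                                 # BPSK: 0 -> -1, 1 -> +1
noise = rng.standard_normal(n) / np.sqrt(2 * ebn0)   # sigma^2 = N0/2 with Eb = 1
bits_hat = (s + noise > 0).astype(int)

p_hat = np.mean(bits_hat != bits)      # measured crossover probability
p_theory = 0.5 * erfc(sqrt(ebn0))      # standard BPSK/AWGN closed form
print(p_hat, p_theory)
```

To the channel decoder, everything between its input and output then looks like a binary symmetric channel with this crossover probability.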
#### Fundamental Tradeoffs

More than 50 years ago, Claude Shannon (a U of M EE/Math graduate) determined the tradeoff between data rate, bandwidth, signal power and noise power for reliable communication over an additive white Gaussian noise channel. Let $W$ be the bandwidth (in Hz), $R$ the data rate (in bits per second), $P$ the received signal power (in watts) and $N_0/2$ the noise power spectral density (in watts/Hz). Then reliable communication is possible provided

$$R < W \log_2\left(1 + \frac{P}{N_0 W}\right).$$

Let $E_b$ be the energy transmitted per bit of information. Then $E_b = P/R$, or $P = E_b R$. Using this relation we can express the capacity formula as

$$\frac{R}{W} < \log_2\left(1 + \frac{E_b}{N_0}\frac{R}{W}\right).$$

Inverting this we obtain

$$\frac{E_b}{N_0} > \frac{2^{R/W} - 1}{R/W}.$$

The interpretation is that reliable communication is possible with bandwidth efficiency $R/W$ provided that the signal-to-noise ratio $E_b/N_0$ is larger than the right-hand side of the above equation. Usually energy or power ratios are expressed in dB. The conversion is

$$E_b/N_0 \text{ (dB)} = 10 \log_{10}(E_b/N_0).$$

(Figure: required $E_b/N_0$ in dB versus rate in bits/second/Hz for the capacity limit.)

Notes:

- The capacity formula only provides a tradeoff between energy efficiency and bandwidth efficiency. Complexity is essentially infinite, as is delay.
- The channel model is rather benign in that no signal fading is assumed to occur.

(Figures 4-8: Claude Elwood Shannon.)

#### Doing

We want to design a transmitter and a receiver. The transmitter will send short messages at random times (think of a garage door opener). The first thing we want to design is the signal detection subsystem. That is, when the receiver is turned on and listening, how does it know when the transmitter is transmitting a signal to it?
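The $E_b/N_0$ bound from the capacity formula is easy to tabulate; a minimal sketch:

```python
from math import log10

def ebn0_min_db(r):
    """Minimum Eb/N0 in dB for reliable communication at spectral
    efficiency r = R/W bits/s/Hz: (2**r - 1) / r, converted to dB."""
    return 10 * log10((2 ** r - 1) / r)

# A few points on the capacity curve (R/W in bits/s/Hz -> Eb/N0 in dB):
for r in (0.1, 1.0, 2.0, 4.0):
    print(r, round(ebn0_min_db(r), 2))
```

As $R/W \to 0$ the bound approaches $\ln 2$, about $-1.59$ dB (the Shannon limit); at $R/W = 1$ it is exactly 0 dB, and it grows rapidly as bandwidth efficiency increases.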
In 15 minutes, figure out how to design a transmitter and receiver to achieve this goal. Design the transmitted signal. Design the receiver detection system such that it figures out whether the signal is present or not.

#### Lecture Notes 2: Detection Theory

Goals:

- Optimum detection in AWGN
- Optimum detection with nuisance (unwanted) parameters

#### M-ary Detection Problem

Consider the problem of deciding which of $M$ hypotheses is true based on observing a random variable (vector) $Z$. The performance criterion we consider is the average error probability: the probability of deciding anything except hypothesis $H_j$ when hypothesis $H_j$ is true. The underlying model is that there is a conditional probability density (mass) function of the observation $Z$ given each hypothesis. Let $p_i(z)$ be the conditional probability density of observing $Z = z$ given hypothesis $H_i$ is true, $\pi_i$ the prior probability of $H_i$, and $R_i$ the set of observations where the receiver decides signal $i$ was sent.

The average probability of making an error is then

$$P_e = \sum_{i=0}^{M-1} \pi_i\, P\{\text{decide } H_j, j \ne i \mid H_i \text{ true}\} = \sum_{i=0}^{M-1} \pi_i \left[ 1 - \int_{R_i} p_i(z)\, dz \right] = 1 - \sum_{i=0}^{M-1} \int_{R_i} p_i(z)\, \pi_i\, dz.$$

(Figures: decision regions $R_0$, $R_1$, $R_2$ formed by comparing $p_0(z)\pi_0$, $p_1(z)\pi_1$, $p_2(z)\pi_2$, for both continuous and discrete observations.)

#### Optimum Decision Rule

The decision rule that minimizes the average error probability assigns $Z = z$ to $R_i$ if

$$p_i(z)\,\pi_i = \max_{0 \le j \le M-1} p_j(z)\,\pi_j.$$

This is called the maximum a posteriori probability (MAP) rule. Thus for $M$ hypotheses the decision rule that minimizes average error probability is to choose $i$ so that $p_i(z)\pi_i > p_j(z)\pi_j$ for all $j \ne i$. Let

$$\Lambda_{i,j} = \frac{p_i(z)}{p_j(z)}, \qquad i, j = 0, 1, \ldots, M-1.$$

Then the optimal decision rule is: choose $i$ if $\Lambda_{i,j} > \pi_j/\pi_i$ for all $j \ne i$.

#### Equivalent Decision Rule

Let $p(z)$ be an arbitrary density function that is nonzero wherever any $p_i(z)$ is nonzero. An equivalent decision rule is to assign $Z = z$ to $R_i$ if

$$\frac{p_i(z)}{p(z)}\,\pi_i = \max_{0 \le j \le M-1} \frac{p_j(z)}{p(z)}\,\pi_j.$$

We will usually assume $\pi_i = 1/M$ for all $i$. (If not, we should do source encoding to reduce the entropy (rate).) For this case the optimal decision rule is: choose $i$ if $\Lambda_{i,j} > 1$ for all $j \ne i$. In this case the MAP rule is called the maximum likelihood (ML) rule.

#### Two Signals in Additive White Gaussian Noise

Consider two equally likely signals $s_0 = (s_{0,0}, s_{0,1}, \ldots, s_{0,N-1})$ and $s_1 = (s_{1,0}, s_{1,1}, \ldots, s_{1,N-1})$. The received vector $z$ is the transmitted vector plus additive white (independent) Gaussian noise:

$$H_0: z_j = s_{0,j} + n_j, \qquad H_1: z_j = s_{1,j} + n_j, \qquad j = 0, 1, \ldots, N-1.$$

Let $p_l(z)$ be the conditional density of the observation $z$ given $H_l$ ($l = 0, 1$):

$$p_l(z) = \prod_{j=0}^{N-1} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{-\frac{(z_j - s_{l,j})^2}{2\sigma^2}\right\} = \left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^{N} \exp\left\{-\frac{\|z - s_l\|^2}{2\sigma^2}\right\}.$$

So

$$\Lambda_{0,1} = \frac{p_0(z)}{p_1(z)} = \exp\left\{-\frac{\|z - s_0\|^2}{2\sigma^2} + \frac{\|z - s_1\|^2}{2\sigma^2}\right\}.$$

The optimal decision rule is: decide $H_0$ if $\Lambda_{0,1} > 1$ and $H_1$ if $\Lambda_{0,1} < 1$; equivalently, decide $H_0$ if $\log \Lambda_{0,1} > 0$ and $H_1$ otherwise. Here

$$\log \Lambda_{0,1} = \frac{\|z - s_1\|^2 - \|z - s_0\|^2}{2\sigma^2},$$

so the rule becomes: decide $H_0$ if $\|z - s_0\|^2 < \|z - s_1\|^2$, and $H_1$ otherwise. In other words, choose the signal that is closest to the received vector as the signal transmitted.

#### Optimum Detection of Binary Signals in Fading Channels

Consider a system with $L$ antennas. Assume that the receiver knows exactly the faded amplitude on each antenna.
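The minimum-distance rule derived above reduces ML detection to comparing two squared distances. A small sketch, with hypothetical antipodal signal vectors chosen only for illustration:

```python
import numpy as np

def ml_decide(z, s0, s1):
    # Minimum-distance (ML) rule for two equally likely signals in AWGN:
    # decide H0 iff ||z - s0||^2 < ||z - s1||^2.
    return 0 if np.sum((z - s0) ** 2) < np.sum((z - s1) ** 2) else 1

# Illustrative antipodal signal vectors (not from the notes):
s0 = np.array([1.0, 1.0, 1.0, 1.0])
s1 = -s0

rng = np.random.default_rng(2)
z = s0 + 0.5 * rng.standard_normal(4)   # H0 transmitted through AWGN
print(ml_decide(z, s0, s1))
```

For antipodal signals the comparison collapses further to the sign of the correlation $z^T s_0$, which is why correlator receivers implement this rule.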
The model of the system is

$$y_l = r_l \sqrt{E}\, b + \eta_l, \qquad l = 0, 1, \ldots, L-1,$$

where the $r_l$ are independent Rayleigh-distributed random variables, $\eta_l$ is a Gaussian random variable (variance $N_0/2$), and $b$ represents the data bit transmitted, which is either $+1$ or $-1$. The random variable $r_l$ represents the fading from the transmitter to the $l$-th antenna and has density

$$p(r_l) = \begin{cases} 0, & r < 0 \\ \dfrac{r}{\sigma^2}\, e^{-r^2/2\sigma^2}, & r \ge 0. \end{cases}$$

We assume the fading at each antenna is independent, and that the receiver knows (via some estimation scheme) the fading amplitudes exactly. In this case the observation is

$$z = (y_0, y_1, \ldots, y_{L-1}, r_0, r_1, \ldots, r_{L-1}).$$

Example: suppose $L = 4$ and $y = (10, 1, 5, 5)$, $r = (2, 7, 3, 3)$. Based on this observation, which signal is most likely to have been transmitted?

The optimal method to combine the demodulator outputs and channel fading levels can be derived as follows. Let $p_{+1}(y_0, \ldots, y_{L-1} \mid r_0, \ldots, r_{L-1})$ be the conditional density of $y_0, \ldots, y_{L-1}$ given that the transmitted bit is $+1$ and the fading amplitudes are $r_0, \ldots, r_{L-1}$. The unconditional density is

$$p_{+1}(z) = p_{+1}(y_0, \ldots, y_{L-1}, r_0, \ldots, r_{L-1}) = p_{+1}(y_0, \ldots, y_{L-1} \mid r_0, \ldots, r_{L-1})\; p(r_0, \ldots, r_{L-1}).$$

The conditional density of $y_l$ given $b = +1$ and $r_l$ is Gaussian with mean $r_l\sqrt{E}$ and variance $N_0/2$, and the joint density of $y_0, \ldots, y_{L-1}$ is the product of the marginal densities. The optimal combining rule is derived from the likelihood ratio; the factor $p(r_0, \ldots, r_{L-1})$ cancels:

$$\Lambda = \frac{p_{+1}(z)}{p_{-1}(z)} = \frac{p_{+1}(y_0, \ldots, y_{L-1} \mid r_0, \ldots, r_{L-1})}{p_{-1}(y_0, \ldots, y_{L-1} \mid r_0, \ldots, r_{L-1})} = \frac{\exp\left\{-\frac{1}{N_0}\sum_{l=0}^{L-1}(y_l - r_l\sqrt{E})^2\right\}}{\exp\left\{-\frac{1}{N_0}\sum_{l=0}^{L-1}(y_l + r_l\sqrt{E})^2\right\}} = \exp\left\{\frac{4\sqrt{E}}{N_0}\sum_{l=0}^{L-1} y_l r_l\right\}.$$

The optimum decision rule compares $\Lambda$ with 1. Thus the optimal rule is

$$\sum_{l=0}^{L-1} r_l y_l \;\underset{b=-1}{\overset{b=+1}{\gtrless}}\; 0.$$

Note that we do not need to know the density of the fading amplitudes for this decision rule. This decision rule is called maximal ratio combining (MRC).

In the special case where there is just one antenna, the optimum receiver reduces to

$$r_0 y_0 \;\underset{b=-1}{\overset{b=+1}{\gtrless}}\; 0 \quad\Longleftrightarrow\quad y_0 \;\underset{b=-1}{\overset{b=+1}{\gtrless}}\; 0.$$

Thus the optimum receiver for just one antenna (and BPSK) does not need the information about the received amplitude to make a (hard) decision. However, the performance depends critically on the distribution of the fading amplitude. For the Rayleigh-faded case (with $L = 1$) the error probability is

$$P_e = \frac{1}{2}\left[1 - \sqrt{\frac{\bar{E}/N_0}{1 + \bar{E}/N_0}}\right],$$

where $\bar{E}$ is the average received energy per bit.

(Figure: bit error probability versus $E_b/N_0$ for $L = 1$, comparing Rayleigh fading with AWGN; the fading curve falls off far more slowly.)

#### Optimum Detection of M-ary Orthogonal Signals for Minimum Bit Error Probability

In this example we consider the problem of detection with unwanted parameters. To illustrate, consider the problem of minimizing the bit error probability for an $M$-ary orthogonal signal set. Let $s_0(t), \ldots, s_{M-1}(t)$ be orthogonal signals:

$$\begin{aligned} 00000 &\;\rightarrow\; s_0(t) = \sqrt{E}\,\phi_0(t) \\ 00001 &\;\rightarrow\; s_1(t) = \sqrt{E}\,\phi_1(t) \\ &\;\;\vdots \\ 11111 &\;\rightarrow\; s_{M-1}(t) = \sqrt{E}\,\phi_{M-1}(t). \end{aligned}$$

Let $b_0, \ldots, b_{k-1}$ be the sequence of bits determining which of the $M = 2^k$ signals is transmitted, and assume the bits are independent and equally likely. The receiver consists of a bank of matched filters (correlators) that generate a sufficient statistic. If signal $s_j$ is transmitted then

$$z_m = \delta(j, m)\sqrt{E} + \eta_m, \qquad m = 0, 1, \ldots, M-1,$$

where $\delta(j, m) = 1$ if $j = m$ and 0 otherwise. For example, if $M = 4$ the decision variables are:

| Data bits = 00 | Data bits = 01 | Data bits = 10 | Data bits = 11 |
| --- | --- | --- | --- |
| $z_0 = \sqrt{E} + \eta_0$ | $z_0 = \eta_0$ | $z_0 = \eta_0$ | $z_0 = \eta_0$ |
| $z_1 = \eta_1$ | $z_1 = \sqrt{E} + \eta_1$ | $z_1 = \eta_1$ | $z_1 = \eta_1$ |
| $z_2 = \eta_2$ | $z_2 = \eta_2$ | $z_2 = \sqrt{E} + \eta_2$ | $z_2 = \eta_2$ |
| $z_3 = \eta_3$ | $z_3 = \eta_3$ | $z_3 = \eta_3$ | $z_3 = \sqrt{E} + \eta_3$ |

For example, suppose $E = 1$, $\sigma^2 = 1$, and the observation is $z_0 = 1$, $z_1 = 0.1$, $z_2 = 0.7$, $z_3 = 0.7$. Is the most likely (minimum error probability) decision for the first bit 0 or 1?
#### Minimum Bit Error Probability for Bit b0

Consider the detection of data bit $b_0$; that is, we are interested in minimizing the probability of error for data bit $b_0$. Let $H_0$ be the event $b_0 = 0$ and $H_1$ the event $b_0 = 1$, and let $z = (z_0, z_1, \ldots, z_{M-1})$. The optimal receiver compares the two a posteriori probabilities:

$$p(z \mid H_0)\,\pi_0 \;\underset{H_1}{\overset{H_0}{\gtrless}}\; p(z \mid H_1)\,\pi_1.$$

To calculate $p(z \mid H_0)$ we average over the other data bits; signals $0, \ldots, M/2-1$ carry $b_0 = 0$:

$$p(z \mid H_0)\,\pi_0 = \sum_{b_1, \ldots, b_{k-1}} 2^{-k}\, p(z \mid b_0 = 0, b_1, \ldots, b_{k-1}) = 2^{-k} \sum_{m=0}^{M/2-1} \left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^{M} \exp\left\{-\frac{1}{2\sigma^2}\sum_{l=0}^{M-1}\bigl(z_l - \delta(l, m)\sqrt{E}\bigr)^2\right\}.$$

Expanding the square and pulling out the factors that do not depend on $m$,

$$p(z \mid H_0)\,\pi_0 = 2^{-k} \left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^{M} \exp\left\{-\frac{1}{2\sigma^2}\sum_{l=0}^{M-1} z_l^2\right\} e^{-E/2\sigma^2} \sum_{m=0}^{M/2-1} \exp\left\{\frac{z_m\sqrt{E}}{\sigma^2}\right\}.$$

Similarly,

$$p(z \mid H_1)\,\pi_1 = 2^{-k} \left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^{M} \exp\left\{-\frac{1}{2\sigma^2}\sum_{l=0}^{M-1} z_l^2\right\} e^{-E/2\sigma^2} \sum_{m=M/2}^{M-1} \exp\left\{\frac{z_m\sqrt{E}}{\sigma^2}\right\}.$$

Notice that many of the factors in $p(z \mid H_0)\pi_0$ and $p(z \mid H_1)\pi_1$ are the same. Thus the likelihood ratio for bit $b_0$ is

$$\frac{p(z \mid H_1)\,\pi_1}{p(z \mid H_0)\,\pi_0} = \frac{\sum_{m=M/2}^{M-1} \exp\{z_m\sqrt{E}/\sigma^2\}}{\sum_{m=0}^{M/2-1} \exp\{z_m\sqrt{E}/\sigma^2\}},$$

and the log-likelihood ratio is

$$\log\frac{p(z \mid H_1)\,\pi_1}{p(z \mid H_0)\,\pi_0} = \log\left(\sum_{m=M/2}^{M-1} e^{z_m\sqrt{E}/\sigma^2}\right) - \log\left(\sum_{m=0}^{M/2-1} e^{z_m\sqrt{E}/\sigma^2}\right).$$

#### Approximation

This can be approximated by

$$\log\frac{p(z \mid H_1)\,\pi_1}{p(z \mid H_0)\,\pi_0} \approx \max_{M/2 \le m \le M-1}\bigl(z_m\sqrt{E}/\sigma^2\bigr) - \max_{0 \le m \le M/2-1}\bigl(z_m\sqrt{E}/\sigma^2\bigr).$$

For the example of $z_0 = 1$, $z_1 = 0.1$, $z_2 = 0.7$, $z_3 = 0.7$, $E = 1$, $\sigma^2 = 1$: since $e^1 + e^{0.1} < e^{0.7} + e^{0.7}$, we decide the first bit is 1.

#### Performance Bounds

The performance of the optimum receiver is in many cases very difficult to evaluate. As a result, bounds on and approximations to the performance are often sought. The bounds include the union bound, the Chernoff bound, and the Gallager bound.
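The worked bit-LLR example above can be checked in a few lines, computing both the exact log-likelihood ratio and its max-log approximation:

```python
from math import exp, log

# Worked example from the notes: M = 4 orthogonal signals, sqrt(E) = 1,
# sigma^2 = 1, observation z = (1, 0.1, 0.7, 0.7).  Signals 0 and 1
# carry b0 = 0; signals 2 and 3 carry b0 = 1.
z = [1.0, 0.1, 0.7, 0.7]
sqrt_e, sigma2 = 1.0, 1.0
m = [zi * sqrt_e / sigma2 for zi in z]

llr_exact = log(exp(m[2]) + exp(m[3])) - log(exp(m[0]) + exp(m[1]))
llr_maxlog = max(m[2:]) - max(m[:2])     # max-log approximation

# The exact LLR is slightly positive (decide b0 = 1), while the max-log
# value is -0.3 (it would decide b0 = 0 here): the approximation can
# change the decision when the two sums are close.
print(llr_exact, llr_maxlog)
```

This example is a useful caution: the max-log approximation is accurate when one exponential dominates each sum, but near the decision boundary it can disagree with the exact MAP rule.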
& % II-34 ' Chernoff Bound P{X u} esu E [esX ], $s0 1, x u Let g(x) = 0, x < u. Since s 0, g(x) es(xu) . Thus P{X u} = Z u fX (x)dx = Z g(x) fX (x)dx = E [g(X )] E [es(X u) ] = esu E [esX ]. & % II-35 4 3 es(xu) 2 g(x) 1 u x$ ' Chernoff Bound Let Z be a random vector. Let H0 and H1 be two events. Let p0 (z) be the conditional density function of Z given H0 and p1 (z) be the conditional density function of Z given H1 . Theorem Pe & Z Rn 1 ps (z) p0s (z)dz. 1 % II-35 $' = P{ p1 (Z1 , . . ., Zn ) p0 (Z1 , . . ., Zn )|H0 } = p (Z , . . ., Zn ) P{ 1 1 1 |H0 } p0 (Z1 , . . ., Zn ) = Pe P{ln p1 (Z1 , . . ., Zn ) p0 (Z1 , . . ., Zn ) 0 |H0 } p (Z ) Let Y = ln 1 . Then p0 (Z ) E [esY |H0 ] = Pe = P{Y 0|H0 ] Z = = & p (z) exp s ln 1 p (z)dz p0 (z) 0 Z p1 (z) s p0 (z)dz Rn p0 (z) Rn Z ps (z) p1s (z)dz. 0 Rn 1 % II-36$ ' Example 1 pl (z) = N 1 j=0 = & (z j sl , j )2 1 exp{ } , l = 0, 1 2 2 2 1 2 N ||z sl ||2 }, l = 0, 1 exp{ 22 % II-37 $' = Z = Z = Pe,0 Z N 1 Z RN Ns RN 1 2 N RN 1 2 j =0 & ps (z) p1s (z)dz. 1 0 R ||z s1 ||2 1 exp{s } 22 2 N (1s) exp{(1 s) ||z s0 ||2 }dz 22 ||z s1 ||2 ||z s0 ||2 exp{s } exp{(1 s) }dz 22 22 [z j s1, j ]2 [z j s0, j ]2 1 exp{s (1 s) }dz j 22 22 2 % II-38$ ' Pe,0 = N 1 Z j =0 = N 1 Z j =0 = N 1 Z j =0 = N 1 Z j =0 = N 1 Z j =0 = & R 1 1 exp{ 2 [sz2 2z j ss1, j + ss2, j + (1 s)z2 + 2(1 s)z j s0, j + (1 s)s2, j }dz j j 1 j 0 2 2 R 1 1 exp{ 2 [z2 2z j (ss1, j + (1 s)s0, j ) + ss2, j + (1 s)s2, j ]}dz j 1 0 2 j 2 R 1 1 exp{ 2 [z2 2z j b j + c j ]}dz j 2 j 2 R 1 1 exp{ 2 [z2 2z j b j + b2 b2 + c j ]}dz j j j 2 j 2 R c j b2 [z j b j ]2 1 j exp{ } exp{ }dz j 2 2 2 2 2 c j b2 j exp{ 22 } j =0 N 1 % II-39 $' bj cj c j b2 j = = ss1, j + (1 s)s0, j ss2, j + (1 s)s2, j 1 0 = ss2, j + (1 s)s2, j [ss1, j + (1 s)s0, j ]2 1 0 = ss2, j + (1 s)s2, j s2 s2, j 2s(1 s)s1, j s0, j (1 s)2 s2, j 1 0 1 0 = s(1 s)s2, j + s(1 s)s2, j 2s(1 s)s1, j s0, j 1 0 = s(1 s)[s2, j + s2, j 2s1, j s0, j ] 1 0 = s(1 s)[s1, j s0, j ]2 So N 1 s(1 s)(s1, j s0, j )2 }. 
The value of s (0 < s < 1) that makes the above bound as small as possible is s = 1/2. In this case the bound becomes

P_e \le \prod_{j=0}^{N-1} \exp\Big\{ -\frac{(s_{1,j} - s_{0,j})^2}{8\sigma^2} \Big\} = \exp\Big\{ -\frac{\sum_{j=0}^{N-1} (s_{1,j} - s_{0,j})^2}{8\sigma^2} \Big\} = \exp\Big\{ -\frac{d_E^2(s_0, s_1)}{8\sigma^2} \Big\},

where d_E(s_0, s_1) is the Euclidean distance between the two signal vectors.

Karhunen-Loeve Expansion

Suppose z = (z_0, z_1, ..., z_{L-1})^T is a zero-mean (real) random (column) vector with covariance matrix K = E[zz^T]. Assume K_{l,m} = E[z_l z_m] < M for all l, m. Because K is a covariance matrix, K = K^T and K is nonnegative definite, meaning that for any vector a = (a_0, a_1, ..., a_{L-1}),

a^T K a = \sum_{j=0}^{L-1} \sum_{l=0}^{L-1} a_j K_{j,l} a_l \ge 0.

This is true for a covariance matrix because

0 \le E\Big[ \Big( \sum_j a_j z_j \Big)^2 \Big] = E\Big[ \sum_j \sum_l a_j z_j a_l z_l \Big] = \sum_j \sum_l a_j E[z_j z_l] a_l = \sum_j \sum_l a_j K_{j,l} a_l.

Let v_0, v_1, ..., v_{L-1} be the (column) eigenvectors of K with eigenvalues \lambda_0, \lambda_1, ..., \lambda_{L-1}; that is,

K v_l = \lambda_l v_l, \quad l = 0, 1, ..., L-1.

Karhunen-Loeve Expansion

The Karhunen-Loeve expansion states that there exist orthonormal eigenvectors v_0, v_1, ..., v_{L-1} and eigenvalues \lambda_i, i = 0, 1, ..., L-1, such that

z = \sum_{i=0}^{L-1} n_i v_i,

where the n_i are uncorrelated random variables with mean 0 and variance \lambda_i:

n_i = (z, v_i) = z^T v_i = v_i^T z = \sum_{m=0}^{L-1} z_m v_{i,m}.

Orthonormality means

v_i^T v_m = 1 if i = m, and 0 if i \ne m.

Then

E[n_j n_k] = E[(v_j^T z)(z^T v_k)] = v_j^T E[zz^T] v_k = v_j^T K v_k = \lambda_k v_j^T v_k = \lambda_j if j = k, and 0 if j \ne k.

If z_0, z_1, ..., z_{L-1} are Gaussian, then the n_l are Gaussian with mean zero and variance \lambda_l; being uncorrelated, they are also independent.

Finally, the covariance matrix itself can be expanded:

K = E[zz^T] = E\Big[ \sum_{l=0}^{L-1} n_l v_l \sum_{m=0}^{L-1} n_m v_m^T \Big] = \sum_{l=0}^{L-1} \sum_{m=0}^{L-1} v_l E[n_l n_m] v_m^T = \sum_{l=0}^{L-1} \sum_{m=0}^{L-1} v_l \lambda_l \delta_{l,m} v_m^T = \sum_{l=0}^{L-1} \lambda_l v_l v_l^T.
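The vector Karhunen-Loeve expansion is just an eigendecomposition of the covariance matrix, and both claims (K = \sum_l \lambda_l v_l v_l^T, and uncorrelated coefficients with variances \lambda_l) can be checked numerically. A sketch assuming NumPy (the sample size, seed, and the 2x2 covariance matrix, taken from the example that follows, are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[3.0, 2.0], [2.0, 3.0]])      # covariance matrix of the example
lam, V = np.linalg.eigh(K)                  # eigenvalues [1, 5]; columns of V orthonormal

# reconstruction K = sum_l lam_l v_l v_l^T
K_rebuilt = sum(lam[l] * np.outer(V[:, l], V[:, l]) for l in range(len(lam)))
assert np.allclose(K, K_rebuilt)

# empirical check: n = V^T z has uncorrelated entries with variances lam
L = np.linalg.cholesky(K)
z = L @ rng.standard_normal((2, 200000))    # zero-mean samples with covariance K
n = V.T @ z
print(np.cov(n))                            # approximately diag(1, 5)
```

`numpy.linalg.eigh` returns the eigenvalues in ascending order, so the printed sample covariance should be close to diag(1, 5) with near-zero off-diagonal entries.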
Karhunen-Loeve Expansion: Example

Suppose (z_0, z_1) is Gaussian with mean zero and covariance matrix

K = \begin{pmatrix} 3 & 2 \\ 2 & 3 \end{pmatrix}.

The eigenvalues are \lambda_0 = 1 and \lambda_1 = 5, with eigenvectors

v_0 = \begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{pmatrix}, \qquad v_1 = \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix}.

Let n_l = (z, v_l) = \sum_m z_m v_{l,m}:

n_0 = z_0 (1/\sqrt{2}) + z_1 (-1/\sqrt{2}), \qquad n_1 = z_0 (1/\sqrt{2}) + z_1 (1/\sqrt{2}).

Then z = n_0 v_0 + n_1 v_1; that is,

z_0 = n_0 (1/\sqrt{2}) + n_1 (1/\sqrt{2}), \qquad z_1 = n_0 (-1/\sqrt{2}) + n_1 (1/\sqrt{2}).

The covariance matrix can then be expressed as

K = \lambda_0 (v_0 v_0^T) + \lambda_1 (v_1 v_1^T) = 1 \begin{pmatrix} 0.5 & -0.5 \\ -0.5 & 0.5 \end{pmatrix} + 5 \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix}.

Karhunen-Loeve Transform

Define K^{-1} = \sum_l \lambda_l^{-1} v_l v_l^T. Claim: K^{-1} v_m = \lambda_m^{-1} v_m.

Proof:

K^{-1} v_m = \Big( \sum_l \lambda_l^{-1} v_l v_l^T \Big) v_m = \sum_l \lambda_l^{-1} v_l (v_l^T v_m) = \lambda_m^{-1} v_m.

Example:

K^{-1} = 1 \begin{pmatrix} 0.5 & -0.5 \\ -0.5 & 0.5 \end{pmatrix} + \frac{1}{5} \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix} = \begin{pmatrix} 3/5 & -2/5 \\ -2/5 & 3/5 \end{pmatrix}.

Similarly, define K^{1/2} = \sum_l \lambda_l^{1/2} v_l v_l^T and K^{-1/2} = \sum_l \lambda_l^{-1/2} v_l v_l^T; by the same argument, K^{\pm 1/2} v_m = \lambda_m^{\pm 1/2} v_m.

Example:

K^{1/2} = 1 \begin{pmatrix} 0.5 & -0.5 \\ -0.5 & 0.5 \end{pmatrix} + \sqrt{5} \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix} = \begin{pmatrix} \frac{1+\sqrt{5}}{2} & \frac{-1+\sqrt{5}}{2} \\ \frac{-1+\sqrt{5}}{2} & \frac{1+\sqrt{5}}{2} \end{pmatrix}.

Claim: (K^{-1/2} s, K^{-1/2} z) = (s, K^{-1} z).

Proof:

(K^{-1/2} s, K^{-1/2} z) = (K^{-1/2} s)^T K^{-1/2} z = s^T \Big( \sum_l \lambda_l^{-1/2} v_l v_l^T \Big) \Big( \sum_m \lambda_m^{-1/2} v_m v_m^T \Big) z = s^T \Big( \sum_l \lambda_l^{-1} v_l v_l^T \Big) z = (s, K^{-1} z),

using v_l^T v_m = \delta_{l,m}.

Likelihood Ratio

Consider two conditional densities.
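All of K^{-1}, K^{1/2}, and K^{-1/2} have the same form \sum_l \lambda_l^p v_l v_l^T, so one helper suffices to check the claims above numerically. A sketch assuming NumPy (the helper name `mat_power` and the test vectors s, z are ours):

```python
import numpy as np

K = np.array([[3.0, 2.0], [2.0, 3.0]])
lam, V = np.linalg.eigh(K)                  # lam = [1, 5], columns of V orthonormal

def mat_power(p):
    # sum_l lam_l^p v_l v_l^T, built by scaling each eigenvector column
    return (V * lam**p) @ V.T

K_inv, K_half, K_nhalf = mat_power(-1.0), mat_power(0.5), mat_power(-0.5)

assert np.allclose(K_half @ K_half, K)      # K^{1/2} K^{1/2} = K
assert np.allclose(K_inv @ K, np.eye(2))    # K^{-1} K = I

# (K^{-1/2} s, K^{-1/2} z) = (s, K^{-1} z)
s = np.array([1.0, -2.0])
z = np.array([0.3, 4.0])
assert np.isclose((K_nhalf @ s) @ (K_nhalf @ z), s @ K_inv @ z)

print(K_half)   # entries (1+sqrt(5))/2 and (-1+sqrt(5))/2, matching the example
```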
p_1(z) = (2\pi)^{-n/2} (\det K)^{-1/2} \exp\{ -\tfrac{1}{2} (z - s_1)^T K^{-1} (z - s_1) \}
p_0(z) = (2\pi)^{-n/2} (\det K)^{-1/2} \exp\{ -\tfrac{1}{2} (z - s_0)^T K^{-1} (z - s_0) \}

Then

\frac{p_1(z)}{p_0(z)} = \frac{\exp\{ -\tfrac{1}{2} (z - s_1)^T K^{-1} (z - s_1) \}}{\exp\{ -\tfrac{1}{2} (z - s_0)^T K^{-1} (z - s_0) \}}

\log \frac{p_1(z)}{p_0(z)} = \tfrac{1}{2} \big[ (z - s_0)^T K^{-1} (z - s_0) - (z - s_1)^T K^{-1} (z - s_1) \big]
= \tfrac{1}{2} \big[ (K^{-1/2}(z - s_0))^T (K^{-1/2}(z - s_0)) - (K^{-1/2}(z - s_1))^T (K^{-1/2}(z - s_1)) \big]
= \tfrac{1}{2} \big[ \| K^{-1/2} z - K^{-1/2} s_0 \|^2 - \| K^{-1/2} z - K^{-1/2} s_1 \|^2 \big].

The optimum receiver first processes the received signal by whitening (forming K^{-1/2} z) and then finds the corresponding signal (K^{-1/2} s_0 or K^{-1/2} s_1) that is closest in Euclidean distance to the whitened received signal.

Karhunen-Loeve Expansion

Suppose n(t), t \in [a, b], is a zero-mean real random process with covariance function K(s, t). Assume K(s, t) < M (bounded), K(s, t) = K(t, s), K(s, t) is nonnegative definite (as is the case for any covariance function), and a and b are finite. The Karhunen-Loeve expansion states that there exist eigenfunctions \phi_i(t) and eigenvalues \lambda_i such that

n(t) = \sum_{i} n_i \phi_i(t),

where the n_i are random variables with mean 0 and variance \lambda_i, E[n_i n_j] = 0 for i \ne j, and the n_i are independent when n(t) is Gaussian (n(t) is real here). Furthermore, the \phi_i(t) form a complete orthonormal set. The eigenfunctions satisfy

\int_a^b K(s, t) \phi_i(t)\, dt = \lambda_i \phi_i(s), \qquad K(s, t) = \sum_i \lambda_i \phi_i(s) \phi_i(t).

The random variables are determined as

n_i = \int_a^b n(t) \phi_i(t)\, dt.

Note that the n_i, i = 0, 1, ..., are uncorrelated:

E[n_i n_j] = E\Big[ \int_t n(t) \phi_i(t)\, dt \int_s n(s) \phi_j(s)\, ds \Big]
= \int_t \int_s \phi_i(t)\, E[n(t) n(s)]\, \phi_j(s)\, ds\, dt
= \int_t \phi_i(t) \int_s K(t, s) \phi_j(s)\, ds\, dt
= \int_t \phi_i(t)\, \lambda_j \phi_j(t)\, dt
= \lambda_i if i = j, and 0 if i \ne j.

If n(t) is Gaussian, then the n_i are Gaussian, and because they are uncorrelated they are also independent.
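The identity above, that the quadratic-form log-likelihood ratio equals a difference of Euclidean distances after whitening, can be verified directly in the finite-dimensional case. A sketch assuming NumPy (the covariance matrix is the one from the earlier example; the signal vectors and observation are arbitrary test values):

```python
import numpy as np

K = np.array([[3.0, 2.0], [2.0, 3.0]])
lam, V = np.linalg.eigh(K)
K_inv = (V / lam) @ V.T                 # sum_l lam_l^{-1} v_l v_l^T
K_nhalf = (V / np.sqrt(lam)) @ V.T      # sum_l lam_l^{-1/2} v_l v_l^T

s0 = np.array([1.0, 2.0])
s1 = np.array([2.0, 1.0])
z = np.array([0.5, 1.5])

# log(p1/p0) via the quadratic forms with K^{-1}
llr_quad = 0.5 * ((z - s0) @ K_inv @ (z - s0) - (z - s1) @ K_inv @ (z - s1))

# the same quantity after whitening: Euclidean distances of K^{-1/2} z
# to the whitened signals K^{-1/2} s0 and K^{-1/2} s1
w, w0, w1 = K_nhalf @ z, K_nhalf @ s0, K_nhalf @ s1
llr_white = 0.5 * (np.sum((w - w0) ** 2) - np.sum((w - w1) ** 2))

assert np.isclose(llr_quad, llr_white)
print(llr_quad)
```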
& % II-57$ ' Signals as Vectors Goals: Signals as Vectors, Noise as Vectors Optimum Detection in AWGN & % II-58 $' Decomposition of Signal and Noise Given a set of signals s0 (t ), ..., sM1 (t ) there exists a set of orthonormal signals 0 (t ), 1 (t ), ..., N 1 (t ) with N M such that si (t ) = N 1 si,m m (t ) m=0 & % II-59 Vector to Waveform 0 (t ) si,0 1 (t ) Serial si si,1 + to Parallel si,N 1 N 1 (t ) si (t width Waveform to Vector (t ) 0 R s(t ) (t )dt 0 si,0 R s(t ) (t )dt 1 si,1 (t ) 1 si (t ) Parallel to Serial 1 (t ) N R s(t ) (t )dt si,N 1$ ' Decomposition of Noise For any complete orthonormal set of signals 0 (t ), 1 (t ), ... we can represent a noise process as random variables and deterministic orthonormal functions n(t ) = nm m (t ) m=0 nk = & Z n(t ) (t )dt k % II-61 ' $Decomposition of Noise 3 n(t), \hat n(t) 2 1 0 1 2 3 0 0.1 0.2 0.3 0.4 0.5 time 0.6 0.7 0.8 0.9 1 0 10 20 30 40 50 l 60 70 80 90 100 0.4 n l 0.2 0 0.2 0.4 & % II-62$ ' Decomposition of Signal and Noise Consider a communication system that transmits one of M signals. s0 (t ), ..., sM1 (t ) in additive white Gaussian noise. Then given si (t ) was transmitted the received signal is z(t ) = si (t ) + n(t ) = (si,m + nm )m (t ) m=0 Dene zm = si,m + nm . Then z(t ) = zm m (t ) m=0 & % II-63 $' We can determine the (random) variable zm by zm = & Z z(t ) (t )dt m % II-64$ ' Example s0 (t ) = ApT (t ) s1 (t ) = ApT (t ) Let l (t ) = 1 T exp{ j2l f0t } pT (t ) where f0 = 1/T . Then E 0 (t ) s1 (t ) = E 0 (t ) s0 (t ) = n(t ) = nm m (t ) m=0 & % II-65 ' z(t ) = $zm m (t ) m=0 = z(t ) (t )dt m = zm Z Z (si (t ) + n(t )) (t )dt m = si,m + nm Note that we can recover completely z(t ) if we know the coefcients zm , m = 0, 1, .... So the optimal decision based on observing z0 , z1 , ... is also the optimal decision based on observing z(t ). Given signal si (t ) is transmitted we can determine the probability density of zm as follows. 
First, z_m is Gaussian, since it is the result of integrating Gaussian noise. Second, the mean of z_m is s_{i,m} and the variance is N_0/2. So the probability density of z_m conditioned on signal i transmitted (event H_i) is

p_i(z_m) = f_{z_m | H_i}(z_m) = \frac{1}{\sqrt{\pi N_0}} \exp\Big\{ -\frac{(z_m - s_{i,m})^2}{2(N_0/2)} \Big\}.

Next, note that z_m is (conditionally) independent of z_n for m \ne n. Thus

f_{z_0, z_1, ..., z_k | H_i}(x_0, x_1, ..., x_k) = \prod_{m=0}^{k} f_{z_m | H_i}(x_m) = \prod_{m=0}^{k} p_i(x_m).

Additive White Gaussian Noise

Consider three signals in additive white Gaussian noise. For AWGN, K(s, t) = \frac{N_0}{2} \delta(t - s). Let \{\phi_i(t)\}_{i=0}^{\infty} be any complete orthonormal set on [0, T]. We wish to find the decision rule that minimizes the average error probability. First expand the noise using the orthonormal functions and random variables:

n(t) = \sum_{i=0}^{\infty} n_i \phi_i(t),

where E[n_i] = 0, Var[n_i] = N_0/2, and \{n_i\}_{i=0}^{\infty} is an independent, identically distributed (i.i.d.) sequence of Gaussian random variables.

Let

s_0(t) = \phi_0(t) + 2\phi_1(t)
s_1(t) = 2\phi_0(t) + \phi_1(t)
s_2(t) = -\phi_0(t) - 2\phi_1(t)

Note that the energy of each of the three signals is the same: \int_0^T s_i^2(t)\, dt = \|s_i\|^2 = 5. We then have a three-hypothesis testing problem:

H_0: z(t) = s_0(t) + n(t) = \sum_{i=0}^{\infty} (s_{0,i} + n_i) \phi_i(t)
H_1: z(t) = s_1(t) + n(t) = \sum_{i=0}^{\infty} (s_{1,i} + n_i) \phi_i(t)
H_2: z(t) = s_2(t) + n(t) = \sum_{i=0}^{\infty} (s_{2,i} + n_i) \phi_i(t)

Let z_i = \int z(t) \phi_i(t)\, dt. If H_j is true, then z_i = s_{j,i} + n_i, i = 0, 1, .... The decision rule that minimizes the average error probability is:

Decide H_i if \pi_i p_i(z) = \max_j \pi_j p_j(z).

Decision Regions

First normalize each side by the density function for the noise alone. The noise density function for N variables is

p^{(N)}(z) = \Big( \frac{1}{\sqrt{\pi N_0}} \Big)^N \exp\Big\{ -\frac{1}{N_0} \sum_{i=0}^{N-1} z_i^2 \Big\}.

The optimal decision rule is then equivalent to:

Decide H_i if \pi_i \frac{p_i(z)}{p(z)} = \max_j \pi_j \frac{p_j(z)}{p(z)}.
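For equal priors, maximizing \pi_i p_i(z) is the same as choosing the nearest signal vector, which can be confirmed numerically for the three-signal example above. A sketch assuming NumPy (the choice N_0 = 2, the random test points, and the seed are ours):

```python
import numpy as np

N0 = 2.0                                # noise variance per dimension is N0/2 = 1
signals = np.array([[ 1.0,  2.0],       # s0 = phi0 + 2 phi1
                    [ 2.0,  1.0],       # s1 = 2 phi0 + phi1
                    [-1.0, -2.0]])      # s2 = -phi0 - 2 phi1

def density(z, s):
    # product of the per-coordinate Gaussian densities p_i(z_m)
    return np.prod(np.exp(-(z - s) ** 2 / N0) / np.sqrt(np.pi * N0))

rng = np.random.default_rng(1)
for _ in range(1000):
    z = rng.uniform(-4.0, 4.0, size=2)
    by_density = int(np.argmax([density(z, s) for s in signals]))
    by_distance = int(np.argmin(np.sum((signals - z) ** 2, axis=1)))
    assert by_density == by_distance    # equal priors: max p_i(z) = min ||z - s_i||^2
print("maximum-likelihood rule matches minimum-distance rule")
```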
As usual, assume \pi_i = 1/M. Then

\frac{p_0^{(N)}(z)}{p^{(N)}(z)} = \frac{\big( \frac{1}{\sqrt{\pi N_0}} \big)^N \exp\{ -\frac{1}{N_0} [\sum_{i=0,1} (z_i - s_{0,i})^2 + \sum_{i=2}^{N-1} z_i^2] \}}{\big( \frac{1}{\sqrt{\pi N_0}} \big)^N \exp\{ -\frac{1}{N_0} [\sum_{i=0,1} z_i^2 + \sum_{i=2}^{N-1} z_i^2] \}}
= \exp\Big\{ -\frac{1}{N_0} \Big[ \sum_{i=0,1} (z_i - s_{0,i})^2 - z_i^2 \Big] \Big\}
= \exp\Big\{ +\frac{1}{N_0} [2z_0 + 4z_1 - 5] \Big\}.

Since the above does not depend on N, we can let N \to \infty and the result is the same:

\frac{p_0(z)}{p(z)} = \lim_{N \to \infty} \frac{p_0^{(N)}(z)}{p^{(N)}(z)} = \exp\Big\{ +\frac{1}{N_0} [2z_0 + 4z_1 - 5] \Big\}.

Similarly,

\frac{p_1(z)}{p(z)} = \exp\Big\{ +\frac{1}{N_0} [4z_0 + 2z_1 - 5] \Big\}, \qquad \frac{p_2(z)}{p(z)} = \exp\Big\{ +\frac{1}{N_0} [-2z_0 - 4z_1 - 5] \Big\}.

[Figures: the three signal points s_0, s_1, s_2 in the (\phi_0(t), \phi_1(t))-plane, and the resulting decision regions, bounded by the perpendicular bisectors between signal points.]

Likelihood Ratio for Real Signals in Additive Gaussian Noise

Assume two signals in Gaussian noise:

H_0: z(t) = s_0(t) + n(t)
H_1: z(t) = s_1(t) + n(t)

Goal: find the decision rule that minimizes the average error probability. Let n(t) be a zero-mean Gaussian random process with covariance K(s, t), eigenfunctions \phi_i(t), and eigenvalues \lambda_i. The eigenfunctions \phi_i are orthonormal functions and the \lambda_i real numbers such that (see Appendix)

\int K(s, t) \phi_i(t)\, dt = \lambda_i \phi_i(s).

Then n(t) = \sum_{i=0}^{\infty} n_i \phi_i(t), and assuming s_j(t) has finite energy, s_j(t) = \sum_{i=0}^{\infty} s_{j,i} \phi_i(t). Thus

H_j: z(t) = \sum_{i=0}^{\infty} (s_{j,i} + n_i) \phi_i(t), \qquad z_i = s_{j,i} + n_i, \quad i = 0, 1, 2, ....

Define

\Lambda_{j,i}(N) = \frac{p_j(z_0, z_1, ..., z_N)}{p_i(z_0, z_1, ..., z_N)}, \qquad \Lambda_{j,i}(z(t)) = \lim_{N \to \infty} \Lambda_{j,i}(N),

where z_i is Gaussian with mean s_{j,i} and variance \lambda_i:

p_j(z_i) = \frac{1}{\sqrt{2\pi\lambda_i}} \exp\Big\{ -\frac{(z_i - s_{j,i})^2}{2\lambda_i} \Big\}, \qquad p_j(z) = \prod_{i=0}^{N} p_j(z_i).

Then

\Lambda_{j,l}(N) = \exp\Big\{ -\frac{1}{2} \sum_{i=0}^{N} \frac{1}{\lambda_i} \big[ z_i^2 - 2 z_i s_{j,i} + s_{j,i}^2 - z_i^2 + 2 z_i s_{l,i} - s_{l,i}^2 \big] \Big\}
= \exp\Big\{ -\frac{1}{2} \sum_{i=0}^{N} \frac{1}{\lambda_i} \big[ s_{j,i}^2 - s_{l,i}^2 + 2 z_i (s_{l,i} - s_{j,i}) \big] \Big\}.
Let

q_j(t) = \lim_{N \to \infty} \sum_{i=0}^{N} \frac{s_{j,i}}{\lambda_i} \phi_i(t) = \sum_{i=0}^{\infty} \frac{s_{j,i}}{\lambda_i} \phi_i(t).

Then

(z, q_j) = \int z(t) q_j(t)\, dt = \int \Big( \sum_l z_l \phi_l(t) \Big) \Big( \sum_i \frac{s_{j,i}}{\lambda_i} \phi_i(t) \Big) dt = \sum_{i=0}^{\infty} \frac{z_i s_{j,i}}{\lambda_i},

(s_j, q_j) = \int s_j(t) q_j(t)\, dt = \sum_{i=0}^{\infty} \frac{s_{j,i}^2}{\lambda_i}.

Thus

\Lambda_{j,l}(z(t)) = \lim_{N \to \infty} \Lambda_{j,l}(N) = \exp\Big\{ -\frac{1}{2} \big[ (s_j, q_j) - (s_l, q_l) + 2(z, q_l) - 2(z, q_j) \big] \Big\}.

Note: q_j(t) is the solution of the integral equation

\int K(s, t) q_j(t)\, dt = s_j(s),

that is, symbolically, q_j = K^{-1} s_j.

If the noise is white, then the noise power in each direction is the same constant, \lambda_i = N_0/2, and thus

q_j(t) = \sum_{i=0}^{\infty} \frac{s_{j,i}}{N_0/2} \phi_i(t) = \frac{2}{N_0} s_j(t).

The likelihood ratio then becomes

\Lambda_{j,l}(z(t)) = \exp\Big\{ -\frac{1}{N_0} \big[ (s_j, s_j) - (s_l, s_l) + 2(z, s_l) - 2(z, s_j) \big] \Big\} = \exp\Big\{ -\frac{1}{N_0} \big[ \|s_j\|^2 - \|s_l\|^2 + 2(z, s_l - s_j) \big] \Big\}.

For equal-energy signals this amounts to picking the signal with the largest correlation with the received signal.

The optimal receiver in nonwhite Gaussian noise can be implemented in a similar fashion. Since

(s_j, q_j) = (s_j, K^{-1} s_j) = (K^{-1/2} s_j, K^{-1/2} s_j) = \| K^{-1/2} s_j \|^2, \qquad (z, q_j) = (z, K^{-1} s_j) = (K^{-1/2} z, K^{-1/2} s_j),

we have

\Lambda_{j,l}(z(t)) = \exp\Big\{ -\frac{1}{2} \big[ \| K^{-1/2} s_j \|^2 - \| K^{-1/2} s_l \|^2 + 2 (K^{-1/2} z, K^{-1/2} (s_l - s_j)) \big] \Big\}.

It is clear, then, that this is just the optimal receiver for the signals K^{-1/2} s_j received in additive white Gaussian noise. This approach is called whitening, because K^{-1/2} n is a white Gaussian noise process.

Likelihood Ratio for Complex Signals

We now rederive the likelihood ratios for complex signals received in complex noise. We assume that the signals are the lowpass representations of bandpass signals and that the noise is the lowpass representation of a narrowband random process.
Let

H_0: z(t) = s_0(t) + n(t)
H_1: z(t) = s_1(t) + n(t)

where n(t) has covariance K(s, t) with eigenfunctions \phi_i(t) and eigenvalues \lambda_i. Using the Karhunen-Loeve expansion,

H_i: z(t) = \sum_{j=0}^{\infty} (s_{i,j} + n_j) \phi_j(t), \qquad z_j = s_{i,j} + n_j.

Then

\frac{p_j(z_0, z_1, ..., z_N)}{p_i(z_0, z_1, ..., z_N)} = \frac{\prod_{l=0}^{N} \frac{1}{\pi\lambda_l} e^{-|z_l - s_{j,l}|^2/\lambda_l}}{\prod_{l=0}^{N} \frac{1}{\pi\lambda_l} e^{-|z_l - s_{i,l}|^2/\lambda_l}}
= \exp\Big\{ -\sum_{l=0}^{N} \frac{|z_l|^2 + |s_{j,l}|^2 - 2\operatorname{Re}(z_l s_{j,l}^*) - |z_l|^2 - |s_{i,l}|^2 + 2\operatorname{Re}(z_l s_{i,l}^*)}{\lambda_l} \Big\}
= \exp\Big\{ -\sum_{l=0}^{N} \frac{|s_{j,l}|^2 - |s_{i,l}|^2 - 2\operatorname{Re}[z_l (s_{j,l} - s_{i,l})^*]}{\lambda_l} \Big\}.

Let q_j(t) = \sum_{i} \frac{s_{j,i}}{\lambda_i} \phi_i(t); then

(s_j, q_j) = \sum_{l=0}^{\infty} \frac{|s_{j,l}|^2}{\lambda_l}, \qquad (z, q_j) = \sum_{l=0}^{\infty} \frac{z_l s_{j,l}^*}{\lambda_l}.

So

\Lambda_{j,i}(z(t)) = \lim_{N \to \infty} \frac{p_j(z_0, z_1, ..., z_N)}{p_i(z_0, z_1, ..., z_N)} = \exp\{ -[(s_j, q_j) - (s_i, q_i) + 2\operatorname{Re}(z, q_i - q_j)] \}.

Note: since we are dealing with noise derived from a narrowband random process, we cannot use the results derived for real random processes; we must use the likelihood ratio for complex random processes given above. For a real random process the likelihood ratio is

\Lambda_{j,i}(N) = \exp\Big\{ -\frac{1}{2} [(s_j, q_j) - (s_i, q_i) + 2(z, q_i - q_j)] \Big\}.

For (real) additive white Gaussian noise,

q_i(t) = \sum_{j=0}^{\infty} \frac{s_{i,j}}{\lambda_j} \phi_j(t) = \frac{2}{N_0} \sum_{j=0}^{\infty} s_{i,j} \phi_j(t) = \frac{2}{N_0} s_i(t),

so the likelihood ratio (for real signals) becomes

\Lambda_{j,l} = \lim_{N \to \infty} \frac{p_j(z_0, ..., z_N)}{p_l(z_0, ..., z_N)} = \exp\Big\{ -\frac{1}{N_0} \big[ \|s_j\|^2 - 2(z, s_j) - \|s_l\|^2 + 2(z, s_l) \big] \Big\}
= \exp\Big\{ -\frac{1}{N_0} \big[ \|s_j - z\|^2 - \|s_l - z\|^2 \big] \Big\} \gtrless \frac{\pi_l}{\pi_j}.

Assume \pi_j = 1/M, j = 0, 1, ..., M-1; then the threshold \pi_l/\pi_j = 1. An equivalent decision rule is then

\|s_j - z\|^2 \gtrless \|s_l - z\|^2 (decide H_j when the left side is smaller, H_l when larger).
The optimum decision rule for additive white Gaussian noise is then to choose i if

\|s_i - z\|^2 = \min_{0 \le j \le M-1} \|s_j - z\|^2.

Example: M Orthogonal Signals in Additive White Gaussian Noise

Consider the optimum receiver for M-ary orthogonal signals and the associated error probability. Assume the M signals are equienergy and equiprobable. The decision rule derived previously for AWGN is:

Decide H_i if \|s_i - z\|^2 = \min_{0 \le j \le M-1} \|s_j - z\|^2.

Since the M signals are orthogonal and equienergy we can write

\|s_j - z\|^2 = \|s_j\|^2 - 2(s_j, z) + \|z\|^2.

The first term above is the same for each j, as is the last term. Thus finding the minimum is equivalent to finding the maximum of (s_j, z): the receiver should compute the inner product between the received signal and each of the M signals and find the largest such correlation. If the signals are all of duration T, i.e., zero outside the interval [0, T], this is also equivalent to filtering the received signal with a filter with impulse response s_j(T - t), sampling the output of the filter at time T, and choosing the largest.
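The equivalence of the minimum-distance and maximum-correlation rules for equal-energy signals can be confirmed numerically. A sketch assuming NumPy (M = 4 orthogonal signals with energy 5, the noise model, and the seed are our choices; the function names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 4, 4
signals = np.sqrt(5.0) * np.eye(M)       # M orthogonal, equal-energy signal vectors

def detect_mindist(z):
    # choose i minimizing ||s_i - z||^2
    return int(np.argmin(np.sum((signals - z) ** 2, axis=1)))

def detect_corr(z):
    # choose i maximizing the correlation (s_i, z)
    return int(np.argmax(signals @ z))

for _ in range(1000):
    i = rng.integers(M)
    z = signals[i] + rng.standard_normal(N)
    assert detect_mindist(z) == detect_corr(z)
print("correlation receiver matches minimum-distance receiver")
```

The two rules agree for every observation, not just on average, because the terms ||s_j||^2 and ||z||^2 dropped from the distance are the same for all j.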
Thus H j : z(t ) = (s j,m + nm )m (t ) m=0 zm = s j,m + nm , & m = 0, 1, 2, ... % II-98 $' Dene p j (z0 , z1 , . . . , z L ) j,i (L) = . pi (z0 , z1 , . . . , zL ) j,i (z(t )) = lim j,i (L) L where zm is Gaussian with mean s j,m variance N0 /2. & % II-99$ ' p j (zm ) = L p j (r) = p j (zm ) 1 1 exp (zm s j,m )2 N0 N0 L = m=0 ( m=0 L j,l (L) = pL (r) j pL (r) l 1L exp (zm s j,m )2 N0 m=0 ( ( = N0 ) 1 N0 )1 exp N 0 m=0 L m=0 = = & N0 ) 1 1 1 exp N0 1 exp N0 exp 1 N0 L (zm s j,m )2 m=0 L (zm sl,m )2 m=0 L [z2 2zm s j,m + s2j,m z2 + 2zm sl,m s2,m ] m m l m=0 L [s2j,m s2,m + 2zi (sl,m s j,m )] l . m=0 % II-100 $' If we take the limit as L we get j,l (z(t )) = exp 1 (E0 E1 + 2(z, sl s j )] . N0 1 j,l (z(t )) = exp [(s j , s j ) (sl , sl ) + 2(z, sl ) 2(z, s j )] . N0 or equivalently 1 j,l (z(t )) = exp [||s j ||2 ||sl ||2 + 2(z, sl s j )] N0 1 = exp [||z s j ||2 ||z sl ||2 ] N0 The optimum decision rule for additive white Gaussian noise is then to choose i if si z 2 = min s j z 2. & 0 jM 1 % II-101 '$ Demodulator 0 (t ) R z(t )0 (t )dt z0 R z(t )1 (t )dt z1 1 (t ) z(t ) Find si with smallest ||z si ||2 N 1 (t ) & R z(t )N 1 (t )dt zN 1 % II-102 ' $Example: M equal energy signals Now consider the optimum receiver for M -ary equally likely signals and the associated error probability. Assume the M signals are equienergy signals and equiprobable. The decision rule derived previously for AWGN in this case simplies to Decide Hi if ||si z||2 = min 0 jM 1 ||s j z||2 . Now since the M signals are equienergy we can write this as ||s j z||2 = ||s j ||2 2(s j , z) + ||z||2 . The rst term above is constant for each j as is the last term. Thus nding the minimum is equivalent to nding the maximum of & (s j , z). % II-103$ ' Thus the receiver should compute the inner product between the M different signals and nd the largest such correlation. If the signals are all of duration T , i.e. 
zero outside the interval [0, T ] then this is also equivalent to ltering the received signal with a lter with impulse response s j (T t ), sampling the output of the lter at time T and choosing the largest. & % II-104 ' $Demodulator (Equal Energy Case) s0 (t ) R z(t )s0 (t )dt R z(t )s1 (t )dt s1 (t ) z(t ) (z, s0 ) (z, s1 ) Find si with largest (z, s i ) sM1 (t ) & R z(t )sN 1 (t )dt (z, sM1 ) % II-105$ ' Notes about Optimum Receiver in AWGN Consider the case of equally likely signals (0 = ... = M1 = 1/M ). The optimum receiver rst maps the received signal into a N dimensional vector. (z(t ) r). The decision region is determined by the perpendicular bisectors of the signal points. Then the receiver nds which signal is closest (in Euclidean distance) to the received vector. (Find i for which z Ri ). & % II-106 ' Example s0 (t ) T /3 2T /3 s2 (t ) T t s1 (t ) T /3 2T /3 & $T /3 2T /3 T t T t s3 (t ) T t T /3 2T /3 % II-107 '$ Orthonormal Basis Functions 0 (t ) 1 (t ) T T /3 t T t 2T /3 T t 2 (t ) 2T /3 & % II-108 $' Signal Vectors s0 s1 = (1, +1, 1) s2 = (+1, 1, 1) s3 & = (+1, +1, +1) = (1, 1, +1) % II-109 '$ Optimum Receiver 1 0 (t ) z(t ) R z(t )0 (t )dt z0 z(t )1 (t )dt z1 1 (t ) R smallest ||z si ||2 2 (t ) & Find si with R z(t )2 (t )dt z2 % II-110 ' $Optimum Receiver 2 s0 (t ) R z(t )s0 (t )dt R z(t )s1 (t )dt R z(t )s2 (t )dt R z(t )s3 (t )dt s1 (t ) z(t ) & s3 (t ) (z, s 1 ) Find si with s2 (t ) (z, s 0 ) (z, s 2 ) largest (z, s i ) (z, s 3 ) % II-111 '$ Optimum Receiver 3 t = T , 2T , 3T z(t ) Find si with h(t ) = pT (t ) largest Y (t ) (z, s i ) z0 = Y (T ) = z1 = Y (2T ) = Z z2 = Y (3T ) = & Z Z z(t )0 (t )dt = ZT z(t )1 (t )dt = Z 2T z(t )dt z(t )2 (t )dt = Z 3T z(t )dt z(t )dt 0 T 2T % II-112 ' Simplied Calculation $(z, s0 ) = +z0 + z1 + z2 (z, s 1 ) = z0 + z1 z2 (z, s2 ) = +z0 z1 z2 (z, s 3 ) = z0 z1 + z2 First calculate x0 , x1 , x2 , x3 as follows x0 = +z0 x1 = z0 x2 x3 & = z1 + z2 = z1 z2 % II-113$ ' Then (z, s0 ) = x0 + x2 (z, s1 ) = x1 + x3 (z, 
s2 ) = x0 x2 (z, s3 ) = x1 x3 Thus the calculation requires only 6 additions/subtractions. & % II-114 $' References [1] J. E. Padgett, C. G. Gunther, and T. Mattori, Overview of wireless personal communications, IEEE Communications Magazine, pp. 2841, January 1995. [2] D. C. Cox, Wireless personal communications: What is it?, IEEE Communications Magazine, pp. 2035, April 1995. [3] K. Pahlavan and A. H. Levesque, Wireless data communications, Proceedings of the IEEE, vol. 82, no. 9, pp. 13981430, September 1994. & % II-115 Find millions of documents on Course Hero - Study Guides, Lecture Notes, Reference Materials, Practice Exams and more. Course Hero has millions of course specific materials providing students with the best way to expand their education. Below is a small sample set of documents: Michigan - EECS - 555$'Lecture Notes 2: Detection TheoryGoals: Optimum Detection in AWGN Optimum Detection with Nusiance (Unwanted) Parameters&amp;%II-1$'M -ary Detection ProblemConsider the problem of deciding which of M hypothesis is true based onobserving a random Michigan - EECS - 555$'Error Probability for M signalsGoals1. Exact analysis of M -ary orthogonal signals in AWGN channels.2. Gallager bound for arbitrary signals, arbitrary channel.3. Random Coding Bound.&amp;%III-1$'Error ProbabilityProblem: Determine the error pro Michigan - EECS - 555 '$Lecture Notes 4: Asymptotic PerformanceIn this lecture we discuss the asymptotic performance of signals. First weconsider the case of M signals in N dimension when transmitted over theadditive white Gaussian noise channel. We let M and N become lar