Principles of Communication
The communication process:
Sources of information, communication channels, modulation
process, and communication networks
Representation of signals and systems:
Signals, Continuous Fourier transform, Sampling theorem,
seq
Random Process through a linear filter
A random process X(t) is applied as input to a linear
time-invariant filter of impulse response h(t);
it produces a random process Y(t) at the filter output, as
shown in Figure 1.
Figure 1: Transmission of a random process X(t) through a linear time-invariant filter h(t), producing the output process Y(t)
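As a numerical sketch of this setup, a hypothetical 5-tap moving-average filter stands in for h(t), with Gaussian white noise as a sample function of X(t); neither choice comes from the text.

```python
import random

random.seed(0)

# One sample function of a white-noise process X(t), at discrete sample times
x = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Impulse response h(t) of an illustrative LTI filter: a 5-tap moving average
h = [0.2] * 5

# Y(t) = convolution of X(t) with h(t) -- the filter output process
y = [sum(h[k] * x[n - k] for k in range(5)) for n in range(4, len(x))]

def var(s):
    m = sum(s) / len(s)
    return sum((v - m) ** 2 for v in s) / len(s)

# For white-noise input, the output variance is scaled by sum(h_k^2) = 0.2
print(var(x), var(y))
```

The printed variances illustrate how the filter shapes the second-order statistics of the input process.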
Definition of Bandwidth
"Bandwidth is defined as a band containing all frequencies between upper cut-off and lower cut-off frequencies." (see Figure 1)
BW = fu − fl
Figure 1: Bandwidth of a signal; the upper and lower cut-off frequencies fu and fl are the −3 dB points
Modulation
Modulation is a process that causes a shift in the range of
frequencies in a signal.
Signals that occupy the same range of frequencies can be
separated by modulating them onto different carriers.
Modulation improves noise immunity; attenuation depends on
the physical medium.
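The frequency shift produced by modulation can be illustrated numerically. The sketch below multiplies a message tone by a carrier and measures the component amplitudes at a few frequencies; the message and carrier frequencies are illustrative choices, not values from the text.

```python
import math

fs, N = 1000, 1000                 # 1 s of samples at 1 kHz
fm, fc = 5.0, 50.0                 # illustrative message and carrier frequencies
t = [n / fs for n in range(N)]

m = [math.cos(2 * math.pi * fm * ti) for ti in t]                    # message
s = [mi * math.cos(2 * math.pi * fc * ti) for mi, ti in zip(m, t)]   # modulated

def amp(x, f):
    """Amplitude of the frequency-f component, via correlation (a DFT bin)."""
    c = sum(xi * math.cos(2 * math.pi * f * ti) for xi, ti in zip(x, t))
    d = sum(xi * math.sin(2 * math.pi * f * ti) for xi, ti in zip(x, t))
    return 2 * math.hypot(c, d) / len(x)

# Modulation shifts the message energy from fm to the sidebands fc - fm, fc + fm
print(amp(m, fm), amp(s, fc - fm), amp(s, fc + fm), amp(s, fm))
```

The message component at fm disappears from the modulated signal and reappears at fc ± fm, which is exactly the frequency shift described above.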
Single Side Band (SSB) Modulation
In DSB-SC it is observed that there is symmetry in the
band structure. So, even if only one half is transmitted, the other
half can be recovered at the receiver. By doing so, the
bandwidth and power of transmission are reduced.
Angle Modulation
In this type of modulation, the frequency or phase of the carrier is varied in proportion to the amplitude of the modulating signal.
Figure 1: An angle-modulated signal c(t) with carrier amplitude Ac
If s(t) = Ac cos(θi(t)) is an angle-modulated signal, then θi(t) is the instantaneous angle, and the instantaneous frequency is fi(t) = (1/2π) dθi(t)/dt.
Noise Analysis - AM, FM
The following assumptions are made:
Channel model
distortionless
Additive White Gaussian Noise (AWGN)
Receiver Model (see Figure 1)
ideal bandpass filter
ideal demodulator
Figure 1: Receiver model; the received signal x(t) (the modulated signal s(t) plus channel noise) passes through a BPF and then the demodulator
Digital Modulation
Continuous-wave (CW) modulation (recap): a parameter of a sinusoidal carrier wave is varied continuously in accordance with the message signal:
Amplitude
Frequency
Phase
Digital Modulation:
Pulse Modulation:
Analog pulse modulation:
Uncertainty, Information, and Entropy
A probabilistic experiment involves the observation of the output
emitted by a discrete source during every unit of time. The source
output is modeled as a discrete random variable, S, which takes on
symbols from a fixed finite alphabet.
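The entropy H(S) = −Σ pk log2 pk of such a source is easy to compute; the symbol probabilities below are illustrative, not from the text.

```python
import math

# Illustrative probabilities of the source symbols s0..s3
p = [0.5, 0.25, 0.125, 0.125]

# Entropy of the discrete memoryless source, in bits per symbol
H = -sum(pk * math.log2(pk) for pk in p)
print(H)   # 1.75 bits/symbol
```

Less probable symbols carry more information, and H is the average information per emitted symbol.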
Information Theory and Source Coding
Scope of Information Theory:
1. Determine the irreducible limit below which a signal cannot be compressed.
2. Deduce the ultimate transmission rate for reliable communication over a noisy channel.
3. Define channel capacity.
Channel Coding
Channel Coding is done to ensure that the signal transmitted is recovered with very low probability of error at the destination.
Let X and Y be the random variables of symbols at the source and destination respectively. The description
Source Coding
1. Source symbols are encoded in binary.
2. The average code length must be reduced.
3. Removing redundancy reduces the bit-rate.
Consider a discrete memoryless source on the alphabet
S = {s0, s1, ..., sk}
Let the corresponding probabilities be
{p0, p1, ..., pk}
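The average code length Σ pk lk is what a source code tries to minimize. As a minimal sketch, a Huffman code is built with Python's heapq for an illustrative probability set (not values from the text); Huffman coding is the standard example of such a code.

```python
import heapq

# Illustrative symbol probabilities for a discrete memoryless source
probs = {"s0": 0.5, "s1": 0.25, "s2": 0.125, "s3": 0.125}

# Build a Huffman code: repeatedly merge the two least-probable nodes.
# Each heap entry is (probability, tie-break counter, {symbol: partial code}).
heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
heapq.heapify(heap)
count = len(heap)
while len(heap) > 1:
    p1, _, c1 = heapq.heappop(heap)
    p2, _, c2 = heapq.heappop(heap)
    merged = {s: "0" + code for s, code in c1.items()}
    merged.update({s: "1" + code for s, code in c2.items()})
    heapq.heappush(heap, (p1 + p2, count, merged))
    count += 1

code = heap[0][2]
avg_len = sum(probs[s] * len(code[s]) for s in probs)
print(code, avg_len)   # average code length in bits/symbol
```

For these dyadic probabilities the average code length equals the source entropy (1.75 bits/symbol), the irreducible limit mentioned above.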
Random Process: Some Examples
Quadrature Modulation Process: given two random processes
X1(t) and X2(t),
X1(t) = X(t) cos(2πf0 t + Θ)
X2(t) = X(t) sin(2πf0 t + Θ)
where f0 is a constant, and Θ is a random variable that is
uniformly distributed over a range of 0 to 2π.
Random Process: Some Examples
A sinusoid with random phase: consider a sinusoidal signal with random phase, defined by
X(t) = a sin(ω0 t + Θ), where ω0 and a are constants, and Θ is a random variable that is uniformly distributed over a range of 0 to 2π.
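The ensemble statistics of this process can be estimated by simulation. In the sketch below, the amplitude, frequency, and observation time are illustrative values; the point is that the ensemble mean is 0 and the mean-square value is a²/2, independent of t.

```python
import math, random

random.seed(1)
a, w0 = 2.0, 2 * math.pi * 5      # illustrative amplitude and angular frequency
t = 0.3                            # an arbitrary fixed observation time

# Ensemble of realizations: Theta uniform over [0, 2*pi)
samples = [a * math.sin(w0 * t + random.uniform(0, 2 * math.pi))
           for _ in range(200_000)]

mean = sum(samples) / len(samples)
msq = sum(v * v for v in samples) / len(samples)

# E[X(t)] = 0 and E[X(t)^2] = a^2 / 2, whatever t is chosen
print(mean, msq)
```

Repeating the experiment with a different t gives the same statistics, a first hint that this process is stationary.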
Random Process
A random process is defined as an ensemble (collection) of time functions together with a probability rule (see Figure 1).
Figure 1: Random Processes and Random Variables; sample functions x1(t), x2(t), ..., xn(t) drawn from the sample space S, observed over the interval −T to +T
Analogy between Signal Spaces and Vector Spaces
Consider two vectors V1 and V2 as shown in Fig. 1. If V1 is to be
represented in terms of V2
V1 = C12 V2 + Ve    (1)
where Ve is the error.
Figure 1: Representation in vector space; the component C12 V2 is the projection of V1 onto V2
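Choosing C12 so that the error Ve is orthogonal to V2 gives C12 = (V1 · V2)/(V2 · V2), the projection coefficient. A minimal sketch with illustrative vectors:

```python
# Eq. (1): V1 = C12*V2 + Ve, with C12 chosen so that Ve is orthogonal to V2,
# i.e. C12 = (V1 . V2) / (V2 . V2). The vectors below are illustrative.
V1 = (3.0, 4.0)
V2 = (1.0, 0.0)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

C12 = dot(V1, V2) / dot(V2, V2)
Ve = tuple(v1 - C12 * v2 for v1, v2 in zip(V1, V2))

# The error component Ve is orthogonal to V2
print(C12, Ve, dot(Ve, V2))
```

This is the same minimum-error principle used below to represent a signal by a set of orthogonal functions.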
Representation of Signals by a set of Mutually
Orthogonal Real Functions
Let g1(t), g2(t), ..., gn(t) be n real functions that are orthogonal to
each other over an interval (t1, t2):
(1/(t2 − t1)) ∫_{t1}^{t2} gi(t) gj(t) dt = 0,  i ≠ j    (1)
Let
(1/(t2 − t1)) ∫_{t1}^{t2} gj²(t) dt = Kj
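The orthogonality condition in Eq. (1) can be checked numerically. The sketch below uses g1(t) = sin t and g2(t) = cos t over (0, 2π), an illustrative orthogonal pair, and approximates the integrals with the midpoint rule.

```python
import math

t1, t2, N = 0.0, 2 * math.pi, 100_000
dt = (t2 - t1) / N
ts = [t1 + (k + 0.5) * dt for k in range(N)]   # midpoint-rule sample times

g1 = [math.sin(t) for t in ts]                 # g1(t) = sin t
g2 = [math.cos(t) for t in ts]                 # g2(t) = cos t

# (1/(t2-t1)) * integral of gi*gj dt: zero for i != j, nonzero for i == j
cross = sum(a * b for a, b in zip(g1, g2)) * dt / (t2 - t1)
self1 = sum(a * a for a in g1) * dt / (t2 - t1)

print(cross, self1)   # ~0 and ~0.5
```

The cross term vanishes while the self term gives the K1 constant (here 1/2), exactly the structure Eq. (1) describes.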
Fourier Representation of continuous time signals
Any periodic signal f(t) can be represented with a set of complex
exponentials as shown below.
f(t) = F0 + F1 e^{jω0 t} + F2 e^{j2ω0 t} + ... + Fn e^{jnω0 t} + ...    (1)
+ F−1 e^{−jω0 t} + F−2 e^{−j2ω0 t} + ... + F−n e^{−jnω0 t} + ...    (2)
The exp
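The coefficients can be computed as Fn = (1/T) ∫ over one period of f(t) e^{−jnω0 t} dt. A numeric sketch for an illustrative ±1 square wave (not a signal from the text):

```python
import cmath, math

T = 1.0                        # period of the illustrative signal
w0 = 2 * math.pi / T
N = 20_000                     # midpoint-rule resolution
dt = T / N

def f(t):
    """A +/-1 square wave with period T."""
    return 1.0 if (t % T) < T / 2 else -1.0

def Fn(n):
    """Fn = (1/T) * integral over one period of f(t) e^{-j n w0 t} dt."""
    return sum(f((k + 0.5) * dt) * cmath.exp(-1j * n * w0 * (k + 0.5) * dt)
               for k in range(N)) * dt / T

# Odd harmonics dominate: |F1| = 2/pi, while F2 = 0 for this square wave
print(abs(Fn(1)), abs(Fn(2)))
```

The computed |F1| matches the analytical value 2/π, and the even harmonic vanishes, as expected for an odd-symmetric square wave.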
Fourier Representation of continuous time signals
Properties of the Fourier Transform
Translation: shifting a signal in the time domain introduces a linear
phase in the frequency domain.
f(t) ↔ F(ω)
f(t − t0) ↔ e^{−jωt0} F(ω)
Proof: F and F⁻¹ correspond to the forward and inverse Fourier transforms.
Some Example Continuous Fourier transforms
F(δ(t)) = ∫_{−∞}^{+∞} δ(t) e^{−jωt} dt
Given the sifting property
∫_{−∞}^{+∞} f(t) δ(t) dt = f(0) ∫_{−∞}^{+∞} δ(t) dt = f(0)
∫_{−∞}^{+∞} f(t) δ(t − t0) dt = f(t0)
Therefore F(δ(t)) = 1
Linearity of the Fourier transform
F(a1 f1(t) + a2 f2(t)) = a1 F1(ω) + a2 F2(ω)
Continuous Fourier transforms of Periodic
Functions
Fourier transform of e^{jnω0 t}: using the frequency-shifting
property of the Fourier transform,
e^{jnω0 t} = 1 · e^{jnω0 t}
F(e^{jnω0 t}) = F(1) shifted by nω0
= 2π δ(ω − nω0)
Fourier transform of cos ω0 t: since cos ω0 t = (e^{jω0 t} + e^{−jω0 t})/2,
F(cos ω0 t) = π [δ(ω − ω0) + δ(ω + ω0)]
Sampling Theorem and its Importance
Sampling Theorem:
A bandlimited signal can be reconstructed exactly if it is
sampled at a rate at least twice the maximum frequency
component in it.
Figure 1 shows a signal g(t) that is bandlimited.
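Sampling below twice the maximum frequency causes aliasing, which a short sketch makes concrete: a 3 Hz tone sampled at 4 Hz (below the 6 Hz Nyquist rate) is indistinguishable from a 1 Hz tone. The frequencies are illustrative.

```python
import math

f_sig = 3.0    # a 3 Hz tone (illustrative)
fs = 4.0       # sampling rate below the Nyquist rate 2*f_sig = 6 Hz

# Samples of the 3 Hz tone taken at 4 Hz...
under = [math.cos(2 * math.pi * f_sig * k / fs) for k in range(8)]
# ...are identical to samples of a 1 Hz tone: 3 Hz aliases to |fs - f_sig| = 1 Hz
alias = [math.cos(2 * math.pi * 1.0 * k / fs) for k in range(8)]

print(all(abs(a - b) < 1e-9 for a, b in zip(under, alias)))
```

Since the two sample sequences coincide, no reconstruction scheme can tell the tones apart, which is why the sampling rate must exceed twice the maximum frequency.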
Figure 1: Spectrum G(ω) of the bandlimited signal g(t), zero for frequencies above ωm
Discrete-Time Signals and their Fourier
Transforms
Generation of Discrete-time signals
Discrete-time signals are obtained by sampling a continuous-time signal.
The continuous-time signal is sampled with an impulse train
with sampling period T,
which
Z-transforms
Computation of the Z-transform for discrete-time signals:
Enables analysis of the signal in the frequency domain.
The Z-transform takes the form of a polynomial.
Enables interpretation of the signal in terms of the roots of the
polynomial.
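A sketch of this polynomial view for an illustrative finite-length signal (not one from the text): its Z-transform is a polynomial in z⁻¹ whose roots can be read off from the factorization.

```python
# Z-transform of the finite-length signal x[n] = {1, -3, 2} for n = 0, 1, 2:
# X(z) = 1 - 3 z^-1 + 2 z^-2, a polynomial in z^-1 (illustrative example)
x = [1.0, -3.0, 2.0]

def X(z):
    """Evaluate X(z) = sum over n of x[n] * z^{-n}."""
    return sum(xn * z ** (-n) for n, xn in enumerate(x))

# X(z) factors as (1 - z^-1)(1 - 2 z^-1): zeros at z = 1 and z = 2
print(X(1.0), X(2.0), X(3.0))
```

Evaluating X(z) at the zeros gives 0, while other points (such as z = 3) do not, illustrating how the roots characterize the signal.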
Properties of the Z-transform
1. The RoC is generally a ring (annulus) in the z-plane:
0 ≤ rR < |z| < rL ≤ ∞
2. The Fourier transform of x[n] converges when the RoC includes the
unit circle.
3. The RoC does not contain any poles.
4. If x[n] is of finite duration, the RoC contains the entire z-plane,
except possibly z = 0 and/or z = ∞.
Basics of Probability Theory and Random
Processes
Basics of probability theory
The probability of an event E is represented by P(E) and is given by
P(E) = NE / NS    (1)
where NS is the number of times the experiment is performed
and NE is the number of times the event E occurs.
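A relative-frequency sketch of Eq. (1) for an illustrative event, a fair die showing an even number:

```python
import random

random.seed(42)

# Estimate P(E) = NE / NS for the event E = "die shows an even number"
NS = 100_000                                                   # number of trials
NE = sum(1 for _ in range(NS) if random.randint(1, 6) % 2 == 0)

print(NE / NS)   # approaches the true probability 1/2 as NS grows
```

The ratio converges to the true probability as NS increases, which is the relative-frequency interpretation behind Eq. (1).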
Channel Coding
Channel Capacity: the channel capacity, C, is defined as "the maximum mutual information I(X; Y) in any single use of the channel, where the maximization is over all possible input probability distributions {p(xj)} on X":
C = max over {p(xj)} of I(X; Y)
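For the binary symmetric channel this maximization can be carried out numerically; the sketch below uses a coarse grid search over the input distribution, with an illustrative crossover probability p = 0.1.

```python
import math

def H2(p):
    """Binary entropy function, in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_info(q, p):
    """I(X;Y) for a binary symmetric channel: P(X=1) = q, crossover prob p."""
    py1 = q * (1 - p) + (1 - q) * p        # P(Y = 1)
    return H2(py1) - H2(p)                  # I(X;Y) = H(Y) - H(Y|X)

p = 0.1   # illustrative crossover probability
# Maximize I(X;Y) over input distributions q on a grid
C = max(mutual_info(q / 1000, p) for q in range(1001))

print(C, 1 - H2(p))   # capacity of the BSC: C = 1 - H2(p), achieved at q = 1/2
```

The grid search recovers the known closed form C = 1 − H2(p), with the maximum attained by the uniform input distribution.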