Fundamentals of Signals
We are surrounded by man-made and natural signals. In general, when we talk about communications, we are talking about the transfer of desired information, whether up close or to far destinations, using signals of some sort. Communications via electronic means consists of generating signals that contain or carry information and processing them at the receiver to extract that information. In general, a signal is a description or relationship of one parameter to another, such as the relationship of voltage with time for communications signals. This relationship can be discrete or continuous. The type of signal, analog or discrete, used for communication depends on how far the signal has to travel and the medium it will have to pass through. At the receiving site, this signal is decoded and the information recovered. There are various distinct steps in the generation, transfer, and reception of these signals, most specific to either analog or discrete signal processing. The tools for this processing, whether discrete or analog, are governed by communications theory.
Fig. 1.1 – Study of communications can be conceptualized at the unit, link and network levels.
In general, the field of communications can be conceptualized as three fields of study. First there is the component, or the physical unit. Here the analysis consists of performance issues of individual units within the chain. We may worry about the performance of a component, such as a filter, under ideal or specified conditions. In the case of a cell phone, an example would be the design of the handset, which in turn means the design of its amplifier, antenna, filters, and baseband signal processing algorithms. Then comes a single link between a sender and a receiver. This involves looking at how the signal goes from the handset to the base station and on to the intended recipient, and vice versa. These are waveform, or signal, issues: how is the signal affected going from one unit to the next? We study modulation, channel effects, power required, etc., as in Fig. (1.2). The third main part is the integration of various communications links to make a network. A cell network has many base stations and many users; how do we design a network so each is uniquely identified, does not interfere with the others, and the network capacity is not compromised? These are network issues, and we study network design, congestion, and access control.
Fig. 1.2 – Generation of signals in a communications link: generation of information → conversion to signals for transmission (coding, modulation) → medium through which signals travel (wire, air, space) → receiving and processing of the signal (demodulation and decoding) → conversion back to the original form of the information.
Signal types
Most signals in nature are analog. Examples of these are sound, noise, light, heat, and electronic communication signals going through air (or space). These signals vary continuously, and the processing of these types of analog signals is called Analog Signal Processing (ASP). Examples of naturally occurring discrete signals are Morse code messages, numerical counts such as stock market indices and, of course, the bit streams that constitute digital messages. In communication systems, discrete signals are created by sampling continuous signals. These discrete signals are sampled versions of the analog signals, as shown in Fig. (1.3). The amplitude of each sample is the same as that of the original analog signal at the same instant.

Chapter One – Fundamentals of Signals Page 1 www.complextoreal.com
Difference between discrete and digital
A digital signal is a further refinement of this process. A discrete signal is any signal that has values only at specific time intervals; it is not defined between those times. A digital signal, on the other hand, is one that only takes on a specific set of values. For a two-level binary signal, we can take a sampled signal, which is a discrete signal, and then set it to just two levels, −1 or +1. The discrete signal then becomes a digital signal. Digital, however, does not mean two levels. A digital signal can take on any number of values, usually a power of two. The process of gathering the amplitudes into specific levels is called quantization. A binary signal is a special case of a digital signal, and a digital signal is a special case of a discrete signal.
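The step from discrete to digital described above can be sketched in a few lines. The sine frequency, sampling rate, and the simple sign-based two-level quantizer below are illustrative choices, not anything prescribed by the text:

```python
import numpy as np

# Sketch: turn a discrete (sampled) signal into a two-level digital signal.
fs = 8                      # assumed sampling rate, samples per second
n = np.arange(8)            # sample indices over one second
discrete = np.sin(2 * np.pi * 1 * n / fs)   # discrete signal: defined only at n

# Quantize each sample to one of two levels, -1 or +1 (a binary digital signal)
digital = np.where(discrete >= 0, 1, -1)

print(digital)
```

The same idea extends to more levels: quantizing to four or eight levels instead of two still yields a digital signal, just not a binary one.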
Discrete signals can be of very short duration, so that they are essentially like impulses, or they can hold their value for a certain period of time.

Fig. 1.3 – A discrete signal with varying amplitude at each time instant (sample number).
Fig. 1.4 – An analog signal with an equivalent digital (binary) signal (sample number).
Fig. 1.5 – An analog signal with an equivalent digital (4-level) signal (sample number).
There is another type of signal, called a pulse train. Here the signal is a train of small impulses, each lasting a very short time. The phase and amplitude of each impulse determine how it is decoded. These pulse-like signals are used in Ultra Wide Band (UWB) systems and for Pulse Position Modulation (PPM). An EKG is a good example of this type of signal occurring in nature.
We hear a lot about digital signals, but most communication systems would not be possible without analog communications. All wireless signals are analog. A signal may start out analog, be converted to digital for modulation, converted back to analog for radio frequency (RF) transmission, and then converted back to digital, so both forms are needed to complete a link. A communications chain is a combination of both types of signals. If you work with just the handsets (units), then you will most likely work only in the digital domain. If you work on link design or on very high-powered equipment such as satellites, then analog issues become important.
Analog signal
An analog signal is one that is defined for all time. You can have a time-limited signal, such as one that lasts only one second, but within that second it is defined for all time t. A sine wave as written by this equation is defined for all time t:

f(t) = sin(2πt)   (1.1)

A discrete-time signal, which can also be time-limited, is present only at specific, usually regular, intervals. Mathematically such signals are written as a function of an integer index n, where n is the nth time tick from some reference time. If we define T as the interval between ticks, the discrete version of the analog sine wave is written as (1.2); generating it from the analog signal is referred to as the sampling process. The quantization of the discrete signal is called A/D conversion.
f(n) = sin(2πnT),  n = 0, 1, 2, …   (1.2)

In definition (1.2), a sample is generated every T seconds. The sampling frequency is the speed at which the samples are taken and is the inverse of the sampling time.
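As a quick numerical check of (1.2), the sketch below samples a 1 Hz sine at an assumed rate of 10 samples per second and confirms that the discrete samples f(n) = sin(2πnT) agree with the analog signal evaluated at the sample instants t = nT:

```python
import numpy as np

fs = 10.0            # assumed sampling frequency, Hz
T = 1.0 / fs         # sampling interval, seconds
n = np.arange(20)    # sample indices: two seconds of a 1 Hz sine

# Discrete version of f(t) = sin(2*pi*t), per equation (1.2)
f_n = np.sin(2 * np.pi * n * T)

# The samples equal the analog signal evaluated at t = n*T
t = n * T
assert np.allclose(f_n, np.sin(2 * np.pi * t))
```

Since the sine has a 1-second period and there are 10 samples per second, the sample pattern repeats every 10 samples, mirroring the periodicity of the underlying analog signal.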
Sampling frequency:  f_s = 1/T_s   (1.3)

If a signal is sampled at 10 Hz, then the sampling period is T = 1/10 = 0.1 sec. The individual sampled amplitudes for a sampling speed of 10 Hz would be given by

a_n = f(n) = sin(2πnT) = sin(πn/5)   (1.4)

Information Signals

We commonly refer to transmission of voice, music, video and data as information signals. These are the
intended signals, what we want to communicate. The process of sending these signals through a communications chain tacks on a lot of extra information, such as, in a phone call, your phone number, time-date stamps, and much more that is invisible to the sender and which facilitates the transfer of this information. These data signals are over and above the information signal and are considered overhead. The information signal can be analog or digital. Voice and music are considered analog signals; video can be either digital or analog. Stock market and financial data are examples of information signals that are digital. In most cases, an analog signal is converted using an analog-to-digital (A/D) converter by sampling it into a discrete signal and quantizing it prior to transmission. The discrete signals can have any number of levels, but binary, or two-level, signals are most common in communications. In Figs. 1.6–1.8, we see three different digital signals. The levels of +1 and −1 make the first signal a binary signal; it has just two levels. The second signal takes on four discrete values and the third takes on eight.

Information signals are referred to as baseband signals because they are at low frequencies, often less than 50 kHz. A baseband signal can be all information, or it can contain redundant bits making it a coded signal, but it is still at baseband and still has fairly low frequency content.

Fig. 1.6 – A binary signal
Fig. 1.7 – A 3-level signal
Fig. 1.8 – A 4-level signal
Carrier Signals
Low frequency signals do not travel well through most mediums, whether wires or, wirelessly, air. To transmit them over long distances requires a form of modification called modulation. Modulation uses an alternate signal, one that can travel easily through the chosen medium. Modulation is the process of mapping the information signal onto this more-capable signal. These signals, which facilitate the transfer of information over a variety of mediums, are called carriers. The frequency of the carrier is usually much higher than that of the information signal. The choice of a carrier is a function of the medium it must pass through. For wired communications, carriers may be in the kHz range, and for wireless and satellites they are at MHz and GHz frequencies. In the United States, the carriers one can use are prescribed by law because of their ability to interfere.
Fig. 1.9 – A carrier is a pure sinusoid of a particular frequency.
A carrier is a pure sinusoid of one particular frequency and of a certain phase. It is an analog signal, assumed to be continuous and infinitely long. Carriers are produced by voltage controlled oscillators (VCOs). The phase of a carrier, and even its frequency, can drift or change either slowly or abruptly, so in reality carriers are not perfect. The imperfections in the carrier cause problems in removing the information signal at the receiver, and methods have been devised to both track and correct the carrier signal.

Carriers are called carriers because they carry the information signal. This process of "carrying the information" is called modulation. The carrier by itself has no information, because it changes in a fixed and predictable manner, and information implies change of some sort.
Modulated Signals
A modulated signal is a carrier that has been loaded with the information signal. To transfer information, it is the modulated signal that travels from place A to place B. The information signal in its original shape and form is essentially left behind at the source. A modulated signal can have a well-defined envelope, as shown in Figs. 1.10 and 1.12, or it can be wild looking, as in Fig. 1.11. The shape of the modulated signal and the medium through which it travels are very important subjects.

The process of modulation means taking either an analog or a digital signal and turning it into an analog signal. The difference between digital modulation and analog modulation is the nature of the signal that is modulating the carrier; the carrier itself is always analog. In digital modulations we can see the transitions, Figs. 1.10 and 1.11, whereas in analog modulated signals the transitions are not obvious, Fig. 1.12.
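As an illustration of this difference, the sketch below builds two modulated carriers: one from a digital (two-level) message, where the carrier flips abruptly at bit transitions, and one from a slowly varying analog message, where the envelope changes smoothly. The carrier frequency, bit rate, and message shapes are arbitrary illustrative choices, not values from the text:

```python
import numpy as np

fs = 1000                      # simulation sample rate, Hz (arbitrary)
t = np.arange(0, 1, 1 / fs)    # one second of time
fc = 50                        # carrier frequency, Hz (arbitrary)
carrier = np.sin(2 * np.pi * fc * t)

# Digital modulation: a +/-1 bit stream at 5 bits/sec flips the carrier phase
bits = np.array([1, -1, 1, 1, -1])
digital_msg = np.repeat(bits, fs // 5)         # hold each bit for 0.2 s
digitally_modulated = digital_msg * carrier    # visible, abrupt transitions

# Analog modulation: a slow 2 Hz tone varies the carrier envelope smoothly
analog_msg = 0.5 * (1 + np.sin(2 * np.pi * 2 * t))
analog_modulated = analog_msg * carrier        # no obvious transitions
```

Plotting the two modulated waveforms would reproduce the contrast between Figs. 1.10/1.11 (visible transitions) and Fig. 1.12 (smooth envelope).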
Modulation is also analogous to another process called D/A, or digital-to-analog conversion. D/A is typically done at baseband and does not require any change in the frequency of the signal, whereas modulation necessarily implies a frequency translation.

Fig. 1.10 – A modulated carrier signal (digital input)
Fig. 1.11 – Another modulated carrier signal (digital input)
Fig. 1.12 – Yet another modulated carrier signal (analog input)
Bandwidth
Bandwidth can be imagined as a frequency width, sort of the fatness of the signal. The bandwidth of a carrier signal is zero, because a carrier is composed of a single frequency. A carrier signal is devoid of information, whereas information signals are fat with it. The more information, the larger the bandwidth of the information signal. To convey information, an information signal needs to contain many different frequencies, and it is this span of frequency content that is called its bandwidth. The human voice, for example, spans in frequency from 300 Hz to 30,000 Hz. Since a single frequency would make only one kind of tone, the range of the human voice gives it its unique signature. The voice has a bandwidth of approximately 30,000 Hz, and not all of us have the same range of frequencies or amplitudes; although the spectrum of our voice generally falls within that range, our personal bandwidth will vary from this number.

If a voice signal is modulated onto a carrier, what is the bandwidth of the modulated signal? It is still the same, 30,000 Hz. The modulated signal just takes on the bandwidth of the information signal it is carrying, like the man on the motorcycle in Fig. 1.13: he is the modulated signal, and his bandwidth just went from near zero without the load to the size of the mattress, which is his baseband signal.

Fig. 1.13 – Bandwidth is a measure of the frequency content of a signal.
Properties of Signals
Periodicity
Carriers offer strict periodicity, whereas information signals do not have this property. Communications theory and many of the tools used to analyze signals are, however, based on the concept of periodicity. Conversion of signals from the time domain to the frequency domain depends on this property, and many other analytical assumptions we make about signals also require periodicity. Purely periodic math applies to the carriers, whereas the math used to describe the information and modulated signals uses stochastics and information theory, which deal with random signals.

First we will look at properties of periodic signals and later at random signals. Mathematically, a periodic signal is one that has the following property:

f(t) = f(t + T)   (1.5)

Fig. 1.14 – Carriers are periodic, information signals are not.

The signal in Fig. 1.14 is a sampled discrete signal despite the fact that it looks continuous. (The samples are too close together to see.) A periodic signal repeats its pattern with a period T. The pattern can be arbitrary. The value of the signal at any one time is called a sample. Periodicity follows the superposition principle: if we add a whole bunch of periodic signals, all of different frequencies and phases, the result is still periodic.

The mean of a discrete periodic signal x is defined as the average value of its samples:

x̄ = (1/N) Σ_{n=0}^{N−1} x_n   (1.6)

Power and Energy of Signals

Fig. 1.15 – The area under the period is zero for a periodic signal.
It seems that we ought to be able to say something about the area under a periodic signal. But we run into a problem if the periodic signal is symmetrical about the x-axis, as a sine wave is; the signal has no area over an integer number of periods. The negative parts cancel the positive, so the area does not tell us much. If we square the signal, we get something meaningful, something we can measure and compare between signals.

Fig. 1.16 – The area under the squared period is non-zero and indicates the power of the signal.

If we square the sine wave of Fig. 1.15, the area under the squared signal (Fig. 1.16) for one period is 40 units (2 × 4 × 10 / 2), and the total area under this signal is 40 times the number of periods we wish to count. This is called the energy of the signal, defined as the area under the squared signal:

E_x = Σ_{n=0}^{N−1} x_n²   (1.7)

However, since this sum depends on the length of the signal, the energy can be very large, and as such it is not a useful metric. A better measure is the power of the signal. The average power of a signal is defined as the area under the squared signal divided by the number of samples over which the area is measured:

P_x = E_x / N = (1/N) Σ_{n=0}^{N−1} x_n²   (1.8)

Hence the average signal power is defined as the total signal energy divided by the signal time, since N is really a measure of time. The power of a signal is a bounded number and is a more useful quantity than signal energy.
The root mean square (RMS) level of a signal is the square root of its average power. Since power is a function of the square of the amplitude, the RMS value is a measure of the amplitude (voltage) and not of power.

x_RMS = sqrt( (x_1² + x_2² + x_3² + …)/N ) = sqrt( (1/N) Σ_{n=0}^{N−1} x_n² )   (1.9)

The variance of a signal is defined as the power of the signal with its mean removed:

σ_x² = (1/N) Σ_{n=0}^{N−1} (x_n − x̄)²   (1.10)

For zero-mean signals, the variance of the signal is equal to its power. Equation (1.10) becomes equation (1.11) if the mean is set to zero. For non-zero mean, the variance equals the power minus the square of the mean; the mean-squared term x̄² is often called the DC power.

σ_x² = P_x − x̄²   (1.11)

We can also talk about instantaneous power, which is just the amplitude squared at any one moment in
time. Since the signal power is changing with amplitude, we also have the concept of peak power. Peak
power is the peak amplitude squared.
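The relationships among energy, power, RMS and variance in (1.7) through (1.11) are easy to check numerically. The sketch below uses an assumed 100-sample test signal, a sine wave riding on a DC offset so that the mean is non-zero:

```python
import numpy as np

N = 100
n = np.arange(N)
x = 0.5 + np.sin(2 * np.pi * n / 20)   # assumed test signal: sine plus DC offset

energy = np.sum(x**2)                  # E_x, equation (1.7): grows with N
power = energy / N                     # P_x, equation (1.8): bounded
rms = np.sqrt(power)                   # equation (1.9)
mean = np.mean(x)
variance = np.mean((x - mean)**2)      # equation (1.10)

# Equation (1.11): variance equals power minus the DC power (mean squared)
assert np.isclose(variance, power - mean**2)
```

Doubling the length of the signal would double the energy but leave the power, RMS, and variance unchanged, which is why power is the more useful metric.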
Now we define a quantity called bit energy, measured over one bit period (which can span a specific number of carrier cycles, usually more than one):

E_b = Avg{x²(t)} / R_b   (1.12)

For a digital signal, the energy of a bit is equal to the average of the squared amplitude divided by the bit rate. If you
think about it carefully, it makes intuitive sense. The numerator is the average power of the signal. We take the power and divide it by the bit rate, and what we get is the energy in a bit, which we call E_b, or energy per bit. It is a way of normalizing the energy of the signal. This is a very useful parameter, used to compare different communication designs.
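A small worked example of (1.12), assuming a ±1 V binary signal and a bit rate of 1000 bits/sec (both illustrative values, not taken from the text):

```python
# Assumed binary signal: amplitude +/-1 V, so x^2(t) = 1 at every instant
amplitude = 1.0
avg_power = amplitude ** 2        # average of x^2(t)

bit_rate = 1000.0                 # R_b, bits per second (assumed)
Eb = avg_power / bit_rate         # equation (1.12): energy per bit

print(Eb)
```

Note the units: power (watts, into an implied 1-ohm load) divided by bits per second gives joules per bit, confirming that E_b is an energy.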
Random Signals
Information signals are considered random in nature. When you talk, you produce sounds that are essentially random; you don't repeat words in a predictable manner. So signals containing information are by definition not periodic. As compared to the definition of power of a periodic signal, for random processes we define the power slightly differently, as

P_x = E{x²(t)}   (1.13)

which is simply the expected value of the squared instantaneous amplitude. For most communications signals, power is not a function of time and is simply the second moment or, for zero mean, the variance of the signal x(t):

P_x = E{x²(t)} = Variance   (1.14)

This is intuitively obvious; a signal with a large variance has large power. What is interesting about this relationship is that the variance is also equal to the value of the autocorrelation of the signal at zero shift. A zero shift means that we take a signal and multiply it by itself, sample by sample; of course, that is the same as squaring each instantaneous amplitude. We can write this relationship as

P_x = R_x(0) = Variance   (1.15)

For these simple relationships to hold for communications signals, certain conditions have to be true about them. One such property is called stationarity. It means that the statistical properties of the signal stay put. This is generally true of run-of-the-mill communications signals carrying voice, data, etc., but not of all. If the data is suspected to be not random, most equipment will first randomize (scramble) it before transmitting.

For a signal x(t), if the expected value, E{x(t)}, does not change over time, then the signal is called a stationary signal. If the mean and the covariance between signal samples a certain distance apart are constant, then this type of signal, which may not be strictly stationary by the above definition, is called a wide-sense stationary (WSS) signal.
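The chain P_x = R_x(0) = variance in (1.13) through (1.15) can be verified numerically for a zero-mean random signal. The signal below is assumed Gaussian noise with standard deviation 2; any zero-mean stationary signal would do:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=2.0, size=100_000)   # zero-mean random signal

power = np.mean(x**2)            # P_x = E{x^2(t)}, equation (1.13)
variance = np.var(x)             # variance of the samples
R0 = np.mean(x * x)              # autocorrelation at zero shift, R_x(0)

# Equation (1.15): all three agree (the sample mean here is essentially zero)
assert np.isclose(power, R0)
assert np.isclose(power, variance, rtol=1e-3)
```

With a standard deviation of 2, all three quantities come out close to 4, the true variance of the process.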
An ensemble is one piece of a particular signal. A collection of ensembles constitutes the whole signal, and no part of an ensemble is shared with another ensemble. If we take the average value of an ensemble and it turns out to be the same as the whole-signal average, then the signal is called ergodic. Not only is the expected value of the signal constant over time, it is constant across all ensembles.

An example of a signal not meeting these conditions is the average height of people. Height is not a stationary signal, because it changes over time: humans have been getting taller with the passage of time. It is also not ergodic, because the averages of its ensembles (such as the average height in China as compared to the average height in England) are not the same. Many signals, such as average rain rate, are stationary but not ergodic. So ergodicity is the more restrictive condition.
Most signals we deal with in communications are presumed to be stationary and ergodic. They often do not meet these definitions strictly, but the assumptions work well enough and give us mathematical tools to analyze and design communication systems. Of course, there are cases that just cannot be assumed as such. Examples of non-stationary signals are: Doppler signals (coming from a moving source), variations of temperature, and accelerating, fading and transient signals.

Our Fourier transform-based signal processing techniques are strictly valid only for signals that are both stationary and ergodic, but we can use these techniques for signals that don't meet these criteria anyway, as long as we understand what errors the assumptions cause.
Sampling
In signal processing, the most challenging part is to receive a signal in analog form from some source and then figure out what was actually sent. In signal processing terminology, we want to resolve the received signal. Why should that be a problem? The receiver goes through a sampling process and generates received samples. Can't we just connect these points and know what was sent? Yes, maybe.

Here are the sampled values of an unknown signal. This is all we have, just the sampled values. What is the frequency of the signal that was sent? The sampling time is 0.25 seconds, i.e. each sample is 0.25 seconds apart. We connect the points and get the waveform on the right, of frequency 1 Hz.

Fig. 1.17 – Guessing what was sent based on sampled values. (a) Received samples, (b) our assumption of the signal that was sent.
But wait, what about the following? These two functions also pass through the same points. Could they not have been sent? In fact, given these sampled points, an infinite number of signals can pass through the points of Fig. 1.17(a).

Fig. 1.18 – Guessing what was sent based on sampled values. (a) A 2 Hz signal also fits the data, (b) a 4 Hz signal does too.
How do we unambiguously decide what was sent? We see that it could have been any one of many signals with frequency 1 Hz or higher. Given these samples, there is no way we can tell which of those signals was sent. The only thing we can say definitely is that the signal frequency is 1 Hz or larger. So with this sampling rate, the highest frequency we can correctly extract from the signal is 1 Hz.

Working backwards, we see that if we have samples separated by time T_s, then the largest frequency we can unambiguously resolve is 1/(2T_s):

f_largest = 1/(2T_s) = f_s/2   (1.16)

The largest frequency in a signal that can be resolved at a given sampling rate is called the Nyquist frequency, and the sampling frequency needs to be at least twice as large as the highest frequency in the signal, or we cannot recover the signal.
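This resolution limit is easy to demonstrate numerically. The sketch below, using an assumed single 6 Hz tone, checks which frequency bin an FFT detects when the tone is sampled above and below twice its frequency:

```python
import numpy as np

def detected_freq(f_tone, fs):
    """Sample one second of an f_tone Hz sine at fs samples/sec and return
    the frequency bin (1 Hz resolution) where the FFT sees the most energy."""
    n = np.arange(fs)
    x = np.sin(2 * np.pi * f_tone * n / fs)
    spectrum = np.abs(np.fft.rfft(x))
    return int(np.argmax(spectrum))

print(detected_freq(6, 16))   # fs > 2*f: the 6 Hz tone is found at 6 Hz
print(detected_freq(6, 8))    # fs < 2*f: the tone aliases down to 2 Hz
```

Sampled at 16 Hz (above the 12 Hz minimum), the tone is detected correctly; sampled at 8 Hz, it masquerades as a 2 Hz signal, and from the samples alone we could never tell the difference.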
Let's show this by example:

f(t) = sin(2π(4)t) + cos(2π(6)t)   (1.17)

This is a simple signal that contains two sinusoids, of frequencies 4 and 6 Hz. In Fig. 1.19 we show how the signal looks as it is sampled at various speeds. Clearly, as the speed, i.e. the number of samples obtained in one second, decreases, the reconstructed signal begins to look bad. On the right side of each sampled signal is its Fourier Transform, which acts as a kind of frequency detector. In each case we see that the embedded signals are correctly detected until we get to a sampling frequency below 12 Hz; now the larger component (6 Hz) is not detected. If we go below 8 Hz, even the 4 Hz component disappears. There is a component at 2 Hz, but we know this is wrong, an aliasing artifact. In sampling a real signal, of course, we would not know that this is an error, and if we do not sample the signal fast enough, we are liable to claim that the sampled signal is a signal of frequency 2 Hz.

Fig. 1.19 – Sampling speed has to be fast enough (more than 2 times the highest embedded frequency) for the processing to detect it and reconstruct the information signal. (Panels: fs = 256, 128, 64, 32, 16 and 8 samples/sec; amplitude in volts vs. sample number on the left, signal level in dB vs. frequency in Hz on the right.)

Noisy Signals, Random Signals
Being able to decode the original signal successfully depends a lot on understanding the noise, which can enter the link at many points. Noise comes in many flavors, and knowledge of it is fundamental to the study of communications theory. Noise is a non-deterministic, random process. So although we can look at noise signals in the time domain, they are instead described in the frequency domain. Information signals are similarly random and can only be analyzed using information theory.

We need to know the following four properties of random signal distributions: the mean, the variance, the Probability Density Function (PDF), and the Cumulative Distribution Function (CDF).

Mean

Given a random signal x(t), its mean is given by

μ_x = E[x] = ∫ x p(x) dx   (1.18)

where E[x] is the expected value of x and p(x) is the probability density function of x.
The mean is fairly easy to understand. For a voltage vs. time signal, to find the mean we sum the sample amplitudes and divide by the number of samples.

After the mean comes the measure of how much the voltage varies over time. This is called the variance, and it is directly related to the power of the signal.

Variance

σ² = E[(x − μ_x)²] = E[x²] − μ_x²   (1.19)

Probability Density Function (PDF)
This concept seems to cause confusion for several reasons: one, the word density in it; two, it is also known as the Probability Distribution Function (PDF); and three, its similarity to another important idea, the Power Spectral Density (PSD).

The PDF is the statistical description of a random signal. It has nothing to do with the power in the signal, nor does the word density have any obvious meaning here. Let's take the random signal of Fig. 1.20. It has about 8000 points of data, one voltage value for each. The voltage varies from −18.79 V to +17.88 V. If we accumulate all the amplitude values, gather them in bins by quantizing the independent variable (which is amplitude) and plot the result, we get a histogram.

Fig. 1.20 – A data signal that has picked up noise.

In Fig. 1.21, all 8000 values of the amplitude have been grouped into 50 bins. Each bin size is equal to the range of the variable, about 36.07 V, divided by the number of bins: 36.07/50 = 0.7214 V. Each bin contains a certain number of samples that fall in its range. Bin #23, shown, contains 584 samples with amplitudes that fall between voltage levels of −2.1978 V and −1.4764 V. As the amplitudes get large or small, the number of samples gets small. This histogram appears to follow a normal distribution.

Fig. 1.21 – Histogram developed from the signal amplitude values.

The histogram can be normalized so that the area under it is unity, making it a probability density function. This is done by dividing each bin count by the total number of samples and by the quantization interval. Dividing bin #23's count by the total number of samples (584/8000) gives 0.073: the probability that a given amplitude lies between −2.1978 V and −1.4764 V.

Fig. 1.22 – Normalized histogram approaches a Probability Density Function
In the limit, as the width of the bins becomes small, the distribution becomes the continuous probability density function, also called the probability distribution function.

From Wikipedia: "A probability density function can be seen as a 'smoothed out' version of a histogram: if one empirically samples enough values of a continuous random variable, producing a histogram depicting relative frequencies of output ranges, then this histogram will resemble the random variable's probability density, assuming that the output ranges are sufficiently narrow."

p(x) = lim_{Δx_i → 0, N → ∞} N_i / (N Δx_i)   (1.20)

where
Δx_i = the quantization interval of x
N = the total number of samples
N_i = the number of samples that fall in interval Δx_i

Fig. 1.23 – Probability Density Function and Cumulative Distribution Function

To find the probability that a value falls within a specific range x_1 to x_2, we integrate the PDF over the range specified. Symbolically:

P(x_1 ≤ x ≤ x_2) = ∫_{x_1}^{x_2} p(x) dx   (1.21)

Since the PDF is normalized, the total area under it is one:

∫_{−∞}^{∞} p(x) dx = 1   (1.22)

The PDF is always positive. The Cumulative Distribution Function is the summation of
area under the PDF plotted as a function of the independent variable. It is always a
positive function, ranging from 0 to 1.0, as shown in Fig. 1.23.
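The histogram-to-PDF normalization described above, and the 0-to-1 behavior of the CDF, can be sketched as follows. The signal here is assumed Gaussian noise standing in for the noisy data signal; the 50-bin count matches the example above:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 5.0, size=8000)        # stand-in for the noisy data signal

# Histogram -> PDF: divide counts by (total samples * bin width)
counts, edges = np.histogram(x, bins=50)
bin_width = edges[1] - edges[0]
pdf = counts / (len(x) * bin_width)

# Area under the PDF is unity, equation (1.22)
area = np.sum(pdf * bin_width)
assert np.isclose(area, 1.0)

# CDF: running sum of PDF area, rising monotonically from 0 to 1
cdf = np.cumsum(pdf * bin_width)
assert np.all(np.diff(cdf) >= 0) and np.isclose(cdf[-1], 1.0)
```

Note that normalizing by the bin count alone gives bin probabilities; dividing also by the bin width is what turns those probabilities into a density.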
Power Spectral Density (PSD)
A very similar sounding but quite different concept is the Power Spectral Density (PSD).

Fig. 1.24 – Power Spectral Density (PSD) of the signal in Fig. 1.20

The Power Spectral Density (PSD) of a signal is the same as its power spectrum. A spectrum is a plot of the power distribution over the frequencies in the signal. It is a frequency domain concept, whereas the PDF is a time domain concept. You create the PDF by looking at the signal in the time domain, at its amplitude values and their range. The Power Spectral Density, on the other hand, is the relationship of power with frequency. While the variance is the total power of a zero-mean signal, the PSD gives the distribution of this power over the frequencies (power/Hz) contained in the signal. It is a positive, real function of frequency. The power is always positive, whereas the amplitude can be negative. Power spectral density is commonly expressed in watts per hertz (W/Hz) or dBm/Hz.

Fig. 1.25 – Specification of Power Spectral Density (PSD) for a specific signal

The two most common random-process distributions needed in communications signal analysis and design are the uniform distribution and the normal distribution.
Uniform distribution
Take a sinusoid defined at specific samples, as shown below.

Fig. 1.26 – A sinusoid.

The addition of uniformly distributed noise at 10 dB relative to the carrier transforms this signal into Fig. 1.27.

Fig. 1.27 – Transmitted signal which has picked up uniformly distributed noise, distributed between −0.1 and +0.1 V.

The noise level varies from −0.1 to +0.1 in amplitude. If we quantize this noise into, say, 10 levels, then at each sample an 11-sided die is thrown, and one of the values [−1, −.8, −.6, −.4, −.2, 0, .2, .4, .6, .8, 1] × 0.1 is picked and added to the signal as noise. The probability distribution of this noise is shown in Fig. 1.28.

Fig. 1.28 – Uniformly distributed noise, distributed between −0.1 and +0.1
Uniformly distributed noise is not a function of frequency; it is uniformly random. This distribution is most commonly assumed for narrowband signals.
Properties of uniformly distributed noise
If a and b are the limits within which a uniformly distributed noise lies, then its properties are given by the following expressions.

The mean and variance:

Mean = (a + b) / 2   (1.23)

Variance = (b − a)² / 12   (1.24)

The PDF and the CDF of the uniform distribution are given by

PDF:
f(x) = 1 / (b − a)   for a ≤ x ≤ b
f(x) = 0             otherwise   (1.25)

CDF:
F(x) = 0                   for x < a
F(x) = (x − a) / (b − a)   for a ≤ x < b
F(x) = 1                   for x ≥ b   (1.26)

Normal Distribution

This is the most important distribution in communications. It is the classic bell-shaped distribution, also
called the Gaussian distribution. For a zero-mean distribution, its main parameter is the variance. To the signal of Fig. 1.26, a normally distributed noise of variance 0.1 has been added. At first glance the result does not look much different from Fig. 1.27, but if we plot the distribution we get Fig. 1.29: we notice that not all amplitudes are equally probable. This type of noise describes thermal noise and much of the noise we see in electronic equipment.

Fig. 1.28 – Transmitted signal which has picked up normally distributed noise of variance 0.1.

Fig. 1.29 – The bell-shaped probability distribution of a normally distributed random variable.

The PDF and CDF of a normally distributed random variable are given by

PDF:  f(x) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²))   (1.27)

CDF:  F(x) = (1/2) [1 + erf((x − μ) / (σ√2))]   (1.28)

Mean = μ   (1.29)

Variance = σ²   (1.30)

Transforms in signal processing
One word you hear a lot in signal processing is domain. The domain refers to the independent parameter of the signal in question. When we are looking at signals with amplitude, phase, current, etc. vs. time, then time is the independent variable and the signal is in the time domain. Signals in the time domain are easy to understand. The harder one is the frequency domain, which is a transformation of time and where the independent variable is frequency. Here we might be looking at the distribution of amplitude over the various frequencies in the signal. In communications, signal processing is done in these two domains, or dimensions.

Fig. 1.29 – Frequency and time domain of a signal
The two domains are just two different views of the signal, one from the time perspective and the other from the frequency perspective, as shown in Fig. 1.29. To deal with these dimensions, going back and forth between the time domain and the frequency domain, we use transform theory. Every field has its own special math, and in communication science we use transform theory. A transform is just that: a mathematical transformation of data from one form to another, with the goal of making the handling of the data easier and more intuitive.
The log function is a form of a transform. It is used extensively in communications to ease multiplication of large numbers. Instead of multiplying 1000 by 10000, we take their logs, which are 3 and 4 respectively, and then add these to determine the result, which is much easier than multiplying big numbers. Electronic devices, like most of us, like adding better than multiplying, so the log transform makes processing easier and faster.
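The log-transform trick can be checked in a couple of lines, using the text's own numbers:

```python
import math

a, b = 1000, 10000

# Multiply by adding logarithms: log(a*b) = log(a) + log(b)
log_sum = math.log10(a) + math.log10(b)   # 3 + 4 = 7
product = 10 ** log_sum                   # back-transform to the ordinary domain

print(round(log_sum, 12))  # → 7.0
print(round(product))      # → 10000000, same as 1000 * 10000
```

The transform (taking logs), the easy operation (addition), and the inverse transform (raising 10 to the result) mirror exactly how the Fourier and Laplace transforms are used: transform, work in the easier domain, transform back.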
Fourier Transform (FT) and Discrete Fourier Transform (DFT)
The Fourier transform is based on the understanding that a periodic signal that is stationary and ergodic can be represented by the sum of various sinusoids. Figs. 1.30b, 1.30c and 1.30d show the three sinusoids that form the signal of Fig. 1.30a. The Fourier transform is just a plot of the amplitude of each of the component frequencies, as in Fig. 1.31.

Fig. 1.30a – A periodic signal composed of three sinusoids.

Fig. 1.30b – Sine wave 1

Fig. 1.30c – Sine wave 2

Fig. 1.30d – Sine wave 3

Fig. 1.31 – View into the frequency domain of a signal consisting of three sine waves of frequencies 1, 2 and 3 Hz.
This fundamental algorithm allows us to look at time-domain signals in the frequency domain and vice versa. The Fourier transform makes signals easier to understand and manipulate. It does that by recognizing that frequency is analogous to time and that any signal can be considered a collection of many frequencies of different amplitudes, such as the signal shown in Fig. 1.30.

The Fourier transform is a kind of deconstructor. It gives us the frequency components which make up the signal. Once we know the frequency components of the signal, we can use them to manipulate it.
The Discrete Fourier Transform (DFT) operates on discrete samples and is used most often in the analysis and simulation of signal behavior.
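As a sketch of the DFT at work, a three-tone signal like that of Fig. 1.30 (sinusoids at 1, 2 and 3 Hz; the equal unit amplitudes and the 64 Hz sample rate are assumptions) can be deconstructed with NumPy's FFT:

```python
import numpy as np

fs = 64                       # samples per second (assumed)
t = np.arange(0, 1, 1 / fs)   # one second of samples

# Signal made of three sinusoids at 1, 2 and 3 Hz, as in Fig. 1.30
x = np.sin(2*np.pi*1*t) + np.sin(2*np.pi*2*t) + np.sin(2*np.pi*3*t)

# DFT of the samples: the spectrum should show exactly three components
X = np.fft.rfft(x)
amps = 2 * np.abs(X) / len(x)          # scale bin magnitude to sinusoid amplitude
freqs = np.fft.rfftfreq(len(x), 1/fs)  # frequency of each bin in Hz

tones = [(float(f), round(float(a), 2)) for f, a in zip(freqs, amps) if a > 0.1]
print(tones)  # → [(1.0, 1.0), (2.0, 1.0), (3.0, 1.0)]
```

The DFT has "deconstructed" the composite waveform back into its three component frequencies and their amplitudes, which is exactly the plot of Fig. 1.31.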
Laplace transform
The Laplace transform is a general transform of which the Fourier transform is a special case. Where the Fourier transform assumes that the target signal can be recreated by a sum of constant-amplitude sinusoids, the basis functions of the Laplace transform are instead exponentially enveloped sinusoids, as shown in Fig. 1.31. This allows the analysis of transient signals.

An exponential is given by the general expression

x(n) = aⁿ   (1.31)

For a > 1, a positive exponent gives an increasing exponential and a negative exponent a decaying one. When this function is multiplied by a sinusoid, we get a damped signal. Assorted combinations of these allow us to analyze nearly any signal.
The Laplace transform is used most often in the analysis of transient signals, such as the behavior of phase-locked loops. Here a signal may be varying in its amplitude and phase. The error signals computed for these signals, which allow correction of the incoming signal frequency, have the characteristics of damped signals.
Fig. 1.31 – A signal that is not stationary can be represented by a sum of exponentials whose amplitudes are not constant. (Axes: amplitude vs. time t.)
Z-Transform

The Z-transform is the discrete-time counterpart of the Laplace transform. It is used for transient and time-varying (non-stationary) signals, but for discrete signals only. For real-valued signals, t > 0, the Laplace transform is a generalization of the Fourier transform, whereas the Z-transform is a generalization of the Discrete Fourier transform.
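The claim that the Z-transform generalizes the DFT can be checked directly: evaluating X(z) = Σ x(n) z⁻ⁿ at points on the unit circle, z = e^(j2πk/N), reproduces the DFT bins. A small sketch (the four-point sequence is an arbitrary assumption):

```python
import cmath

x = [1.0, 2.0, 3.0, 4.0]   # an arbitrary discrete sequence
N = len(x)

def z_transform(seq, z):
    """X(z) = sum over n of x[n] * z^(-n)."""
    return sum(v * z ** (-n) for n, v in enumerate(seq))

def dft(seq, k):
    """k-th DFT bin of seq."""
    n_pts = len(seq)
    return sum(v * cmath.exp(-2j * cmath.pi * k * n / n_pts)
               for n, v in enumerate(seq))

# The Z-transform evaluated on the unit circle equals the DFT, bin by bin
for k in range(N):
    zk = cmath.exp(2j * cmath.pi * k / N)   # a point on the unit circle
    assert abs(z_transform(x, zk) - dft(x, k)) < 1e-9
print("Z-transform on the unit circle matches the DFT")
```

Off the unit circle (|z| ≠ 1), the same sum includes a growing or decaying envelope, which is what lets the Z-transform handle non-stationary discrete signals.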
Realness of signals
In communications, we often talk about complex and real signals. Initially this is most confusing. Aren't all signals real? What makes a signal complex? Let's start with what you know. If you scream into the air, this is a real signal. In fact all physical signals are real. They are not complex in any sense. What do we mean by complex? Any signal you can create is as real as a wire you string between two points. The act of complexification comes when we take these signals into signal processing and do any kind of separation of the signal into components based on some predefined basis functions. The complexness of signals is a mathematical property that is exploited in baseband and low-frequency processing.
Communications signal processing is mostly a two-dimensional process. We take a physical signal and map it into a preset signal space. The best way to explain this is to resort to the x and y axis projections of a line. A line is real; no matter at what angle, it is real. Of the myriad things we can do to a line, one is to compute its projections onto a Cartesian space. The x-axis and y-axis projections of a line are its complex description. This allows us to compare lines of all sorts on a common basis. Similarly, most analog and natural signals are real but can be mapped into real and complex projections which are essentially like the x-y projections of a line.

The x projection of a signal is called its I, or in-phase, quadrature component, and the y projection is called its Q, or out-of-phase, quadrature projection. The word quadrature means perpendicular, so the I and Q projections are perpendicular in the mathematical sense.
Unlike the projection of a line, which is also a line, we also refer to signal projections as vectors. Here a vector is not a line but a set of ordered points, such as the sampled points of a signal. These ordered sets are called vectors in signal processing and inherit many of the same properties as two-dimensional vectors. Just as two plane vectors can be orthogonal to each other, vectors consisting of sets of points can also be orthogonal.
The following properties apply to both two-dimensional vectors and multi-dimensional vectors consisting of sampled data.

If we define a vector z, then it can be resolved into two orthogonal projections x and y. The operator j is the quadrature operator and means that y is orthogonal to x:

z = x + jy
The amplitude of the signal is given by

|z| = √(x² + y²)   (1.32)

Complex conjugate – The complex conjugate of a signal, also referred to as a vector, is its mirror image about the real (x) axis, as shown in Fig. 1.32. It is written by changing the sign of the imaginary component. Note that conjugation is a reflection, not a rotation: multiplying a number by j rotates it by 90 degrees counterclockwise, and multiplying by −j rotates it by 90 degrees clockwise, which is a different operation.
Fig. 1.32 – Vector Z = x + jy and its complex conjugate, the mirror image of Z about the x axis.
If we define vector z by its quadrature components as

z = x + jy

then the complex conjugate is written as

z* = x − jy   (1.33)

The multiplication of the complex conjugate with the signal gives us

z z* = (x + jy)(x − jy) = x² + y²   (1.34)

This is equal to the power of the signal, and it is a scalar quantity.
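Equations (1.32) through (1.34) can be verified with Python's built-in complex type; the sample values x = 3, y = 4 are arbitrary:

```python
import math

z = 3 + 4j                 # an arbitrary complex sample, z = x + jy

# Amplitude (1.32): |z| = sqrt(x^2 + y^2)
amplitude = math.sqrt(z.real ** 2 + z.imag ** 2)
print(amplitude)           # → 5.0, same as abs(z)

# Complex conjugate (1.33): flip the sign of the imaginary part
z_conj = z.conjugate()
print(z_conj)              # → (3-4j)

# z times its conjugate (1.34) gives x^2 + y^2, a real scalar (the power)
power = z * z_conj
print(power)               # → (25+0j)

# By contrast, multiplying by j rotates the vector by 90 degrees
print(z * 1j)              # → (-4+3j)
```

The product z z* coming out purely real is what makes the conjugate so useful for computing signal power.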
Complex number math is used most often at baseband and in modulation. In modulation, the quadrature representation also includes a frequency shift, so the term quadrature representation is sometimes also taken to mean modulation, of which more is covered in the modulation chapter.

For mistakes etc., please contact me at [email protected]
Charan Langton
Complextoreal.com
Copyright Oct 2008, All Rights Reserved
This note was uploaded on 02/07/2011 for the course EEE EE567 taught by Professor Tutorials during the Spring '11 term at Birla Institute of Technology & Science, Pilani, Hyderabad.