The interarrival process gives us an alternate definition of a Bernoulli process:
Start with an IID Geom($p$) process $T_1, T_2, \ldots$. Record the arrival of an event at
times $T_1$, $T_1 + T_2$, $T_1 + T_2 + T_3, \ldots$
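This construction can be sketched as a short simulation. The function name `bernoulli_arrivals` and the parameter values are illustrative choices, not part of the course material; the geometric variable here has support $\{1, 2, \ldots\}$, matching the interarrival convention above.

```python
import random

def bernoulli_arrivals(p, horizon, seed=0):
    """Generate arrival times in {1, ..., horizon} by summing
    IID Geom(p) interarrival times T_1, T_2, ... (support 1, 2, ...)."""
    rng = random.Random(seed)
    arrivals, t = [], 0
    while True:
        # Draw T ~ Geom(p): number of trials up to and including the first success.
        T = 1
        while rng.random() >= p:
            T += 1
        t += T
        if t > horizon:
            return arrivals
        arrivals.append(t)

times = bernoulli_arrivals(p=0.3, horizon=20)
```

Each recorded time is a partial sum $T_1 + \cdots + T_k$, so the arrival times are strictly increasing, as the definition requires.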
EE 178: Random Processes
Page 7 – 14
• Arrival time process: The sequence of r.v.s $Y_1, Y_2, \ldots$ is called the arrival
time process. From its relationship to the interarrival time process,
$Y_1 = T_1$, $Y_k = \sum_{i=1}^{k} T_i$, we can easily find the mean and variance of
$Y_k$ for any $k$:
\[
\mathrm{E}(Y_k) = \mathrm{E}\biggl(\sum_{i=1}^{k} T_i\biggr) = \sum_{i=1}^{k} \mathrm{E}(T_i) = k \times \frac{1}{p}
\]
\[
\mathrm{Var}(Y_k) = \mathrm{Var}\biggl(\sum_{i=1}^{k} T_i\biggr) = \sum_{i=1}^{k} \mathrm{Var}(T_i) = k \times \frac{1-p}{p^2},
\]
where the variance of the sum equals the sum of the variances because the $T_i$ are independent
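A quick Monte Carlo check of these two formulas; the sampler below and its parameter values are illustrative assumptions, again using the Geom($p$) convention with support $\{1, 2, \ldots\}$.

```python
import random

def sample_Yk(k, p, rng):
    """One sample of Y_k = T_1 + ... + T_k with the T_i IID Geom(p)."""
    total = 0
    for _ in range(k):
        T = 1
        while rng.random() >= p:
            T += 1
        total += T
    return total

rng = random.Random(1)
p, k, n = 0.25, 5, 20000
samples = [sample_Yk(k, p, rng) for _ in range(n)]
mean = sum(samples) / n                          # near k/p = 20
var = sum((s - mean) ** 2 for s in samples) / n  # near k(1-p)/p^2 = 60
```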
Note that $Y_1, Y_2, \ldots$ is not an IID process
It is also not difficult to show that the pmf of $Y_k$ is
\[
p_{Y_k}(n) = \binom{n-1}{k-1} p^k (1-p)^{n-k} \quad \text{for } n = k, k+1, k+2, \ldots,
\]
which is called the Pascal pmf of order $k$
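The Pascal pmf is straightforward to evaluate numerically; `pascal_pmf` is an illustrative name. Truncating the infinite support far enough out, the probabilities should sum to essentially 1.

```python
from math import comb

def pascal_pmf(n, k, p):
    """Pascal pmf of order k: P{Y_k = n} = C(n-1, k-1) p^k (1-p)^(n-k) for n >= k."""
    if n < k:
        return 0.0
    return comb(n - 1, k - 1) * p**k * (1 - p) ** (n - k)

# Sum over n = k, k+1, ..., truncating the (geometrically decaying) tail.
total = sum(pascal_pmf(n, 3, 0.4) for n in range(3, 200))
```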
• Example: In each minute of a basketball game, Alicia commits a foul
independently with probability $p$ and no foul with probability $1-p$. She stops
playing if she commits her sixth foul or plays a total of 30 minutes. What is the
pmf of Alicia's playing time?
Solution: We model the foul events as a Bernoulli process with parameter $p$.
Let $Z$ be the time Alicia plays. Then $Z = \min\{Y_6, 30\}$. The pmf of $Y_6$ is
\[
p_{Y_6}(n) = \binom{n-1}{5} p^6 (1-p)^{n-6}, \quad n = 6, 7, \ldots
\]
Thus the pmf of $Z$ is
\[
p_Z(z) =
\begin{cases}
\binom{z-1}{5} p^6 (1-p)^{z-6}, & \text{for } z = 6, 7, \ldots, 29 \\[4pt]
1 - \sum_{n=6}^{29} p_Z(n), & \text{for } z = 30 \\[4pt]
0, & \text{otherwise}
\end{cases}
\]
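This piecewise pmf can be tabulated directly; `playing_time_pmf` is an illustrative name and $p = 0.2$ an arbitrary choice for the check.

```python
from math import comb

def playing_time_pmf(p):
    """pmf of Z = min{Y_6, 30}: Pascal-of-order-6 terms for z < 30,
    with the remaining mass P{Y_6 >= 30} lumped at z = 30."""
    pmf = {}
    for z in range(6, 30):
        pmf[z] = comb(z - 1, 5) * p**6 * (1 - p) ** (z - 6)
    pmf[30] = 1 - sum(pmf.values())   # P{Y_6 >= 30}
    return pmf

pmf = playing_time_pmf(0.2)
```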
Markov Processes
• A discrete-time random process $X_0, X_1, X_2, \ldots$, where the $X_n$s are
discrete-valued r.v.s, is said to be a Markov process if for all $n \ge 0$ and all
$(x_0, x_1, x_2, \ldots, x_n, x_{n+1})$
\[
\mathrm{P}\{X_{n+1} = x_{n+1} \mid X_n = x_n, \ldots, X_0 = x_0\}
= \mathrm{P}\{X_{n+1} = x_{n+1} \mid X_n = x_n\},
\]
i.e., the past, $X_{n-1}, \ldots, X_0$, and the future, $X_{n+1}$, are conditionally
independent given the present $X_n$
• A similar definition for continuous-valued Markov processes can be provided in
terms of pdfs
• Examples:
◦ Any IID process is Markov
◦ The Binomial counting process is Markov
Markov Chains
• A discrete-time Markov process $X_0, X_1, X_2, \ldots$ is called a Markov chain if
◦ For all $n \ge 0$, $X_n \in \mathcal{S}$, where $\mathcal{S}$ is a finite set called the state space.
We often assume that $\mathcal{S} = \{1, 2, \ldots, m\}$
◦ For $n \ge 0$ and $i, j \in \mathcal{S}$
\[
\mathrm{P}\{X_{n+1} = j \mid X_n = i\} = p_{ij}, \quad \text{independent of } n
\]
So, a Markov chain is specified by a transition probability matrix
\[
P =
\begin{bmatrix}
p_{11} & p_{12} & \cdots & p_{1m} \\
p_{21} & p_{22} & \cdots & p_{2m} \\
\vdots & \vdots & \ddots & \vdots \\
p_{m1} & p_{m2} & \cdots & p_{mm}
\end{bmatrix}
\]
Clearly $\sum_{j=1}^{m} p_{ij} = 1$ for all $i$, i.e., the sum of any row is 1
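A minimal sketch of sampling a chain from its transition probability matrix. The 3-state matrix and the function name are hypothetical, and states are 0-indexed here rather than $1, \ldots, m$.

```python
import random

# A hypothetical 3-state transition matrix; each row must sum to 1.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]
assert all(abs(sum(row) - 1) < 1e-12 for row in P)

def simulate_chain(P, x0, steps, seed=0):
    """Simulate X_0, ..., X_steps with P{X_{n+1} = j | X_n = i} = P[i][j].

    Because the next state is drawn from row P[state] alone, the sample
    path depends on the past only through the present state."""
    rng = random.Random(seed)
    path, state = [x0], x0
    for _ in range(steps):
        state = rng.choices(range(len(P)), weights=P[state])[0]
        path.append(state)
    return path

path = simulate_chain(P, x0=0, steps=100)
```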
EE 178: Random Processes
Page 7 – 18
• By the Markov property, for all $n \ge 0$ and all states
\[
\mathrm{P}\{X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0\}
= \mathrm{P}\{X_{n+1} = j \mid X_n = i\} = p_{ij}
\]
• Markov chains arise in many real-world applications:
◦ Computer networks
◦ Computer system reliability
◦ Machine learning
◦ Pattern recognition
◦ Physics
◦ Biology
◦ Economics
◦ Linguistics