1 Expectation and its Properties

The mean of a random variable $X$ is its mathematical expectation or expected value, i.e., the average value of the random variable over an infinite number of repetitions (repeated samples), denoted $E[X]$ or $\mu_X$. If $X$ is a discrete r.v. that can take the values $x_1, x_2, \ldots, x_n$ with probability mass values $f(x_1), f(x_2), \ldots, f(x_n)$ respectively, then the expected value of $X$ is

$$E[X] = x_1 f(x_1) + x_2 f(x_2) + \cdots + x_n f(x_n) = \sum_{i=1}^{n} x_i f(x_i) = \mu_X$$

That is, for a discrete r.v., the mathematical expectation or mean value is a weighted average of the possible outcomes of the r.v., with the weights being the probabilities of each outcome.

Ex) Let $X$ be a r.v. (specifically, a Bernoulli trial) with probability mass function $f(x) = p^x (1-p)^{1-x}$ for $x = 0, 1$. Then

$$E[X] = \sum_{i=1}^{n} x_i f(x_i) = 0 \cdot f(0) + 1 \cdot f(1) = 0(1-p) + 1(p) = p$$

For a continuous r.v., integration replaces the sum. For example, suppose $Y$ is uniformly distributed with parameters $\theta$ and $\theta_1$ (where $\theta < \theta_1$), so its probability density function is $f(y) = 1/(\theta_1 - \theta)$ for $\theta \le y \le \theta_1$. Then the expected value of $Y$ is $E[Y] = (\theta + \theta_1)/2$, which can be found by integration.

Also, just as the probabilities in a probability mass function for a discrete r.v. sum to one, the area under a probability density function for a continuous r.v. must equal one, i.e., $\int_{-\infty}^{\infty} f(y)\, dy = 1$.
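The two expectations above can be checked numerically. The following is a minimal sketch (the helper name, the choice $p = 0.3$, and the uniform bounds $\theta = 2$, $\theta_1 = 6$ are illustrative assumptions, not from the notes): the discrete case is the weighted average $\sum_i x_i f(x_i)$, and the uniform mean is approximated by a midpoint Riemann sum of $\int y f(y)\, dy$.

```python
def discrete_expectation(values, pmf):
    """Weighted average: E[X] = sum of x_i * f(x_i)."""
    return sum(x * f for x, f in zip(values, pmf))

# Bernoulli(p): outcomes 0 and 1 with masses (1 - p) and p, so E[X] = p.
p = 0.3  # illustrative value
bernoulli_mean = discrete_expectation([0, 1], [1 - p, p])
print(bernoulli_mean)  # 0.3

# Uniform on [theta, theta1]: density f(y) = 1/(theta1 - theta).
# Approximate E[Y] = integral of y * f(y) dy with a midpoint Riemann sum.
theta, theta1 = 2.0, 6.0  # illustrative bounds
n = 100_000
width = (theta1 - theta) / n
uniform_mean = sum(
    (theta + (i + 0.5) * width) * (1.0 / (theta1 - theta)) * width
    for i in range(n)
)
print(uniform_mean)  # approximately (theta + theta1) / 2 = 4.0
```

Both printed values match the closed forms derived above: $E[X] = p$ for the Bernoulli trial, and $E[Y] = (\theta + \theta_1)/2$ for the uniform distribution.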