ProbIntro2: Insight by Mathematics and Intuition for Understanding Pattern Recognition
Waleed A. Yousef
Faculty of Computers and Information, Helwan University
February 27, 2010
Basics of Statistics

A random variable (or vector) is denoted by an upper-case letter, e.g., X. Independent observations (realizations) of this r.v. are independent and identically distributed (i.i.d.), e.g., x_1, ..., x_n.

An estimator μ̂ is a real-valued function of the sample that tries to be "close" in some sense to a population quantity μ. How "close"? Define a loss function, e.g., the squared-error loss L(μ̂, μ) = (μ̂ − μ)², and define the Risk to be the expected loss: E(μ̂ − μ)², the Mean Square Error (MSE).

Important decomposition for any estimator μ̂:

E(μ̂ − μ)² = E((μ̂ − E μ̂) + (E μ̂ − μ))²
          = E(μ̂ − E μ̂)² + (E μ̂ − μ)² + 2 (E μ̂ − μ) E[μ̂ − E μ̂]
          = Var(μ̂) + Bias²(μ̂),

where the cross term vanishes because E[μ̂ − E μ̂] = 0.
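The slides contain no code, but the decomposition above can be checked numerically. The sketch below (not part of the original lecture) uses a deliberately biased estimator, μ̂ = X̄/2, chosen only so that the bias term is visibly nonzero; the empirical MSE then splits exactly into variance plus squared bias.

```python
import numpy as np

# Numerical check of E(mu_hat - mu)^2 = Var(mu_hat) + Bias^2(mu_hat).
# mu_hat = mean(x)/2 is a deliberately biased estimator, used only
# for illustration so that the bias term is nonzero.
rng = np.random.default_rng(0)
mu, sigma, n, reps = 3.0, 2.0, 10, 200_000

samples = rng.normal(mu, sigma, size=(reps, n))
mu_hat = samples.mean(axis=1) / 2          # one estimate per simulated sample

mse = np.mean((mu_hat - mu) ** 2)          # empirical E(mu_hat - mu)^2
var = np.var(mu_hat)                       # empirical Var(mu_hat)
bias2 = (np.mean(mu_hat) - mu) ** 2        # empirical Bias^2(mu_hat)

print(mse, var + bias2)                    # the two agree
```

For empirical moments the identity holds exactly, not just approximately, since it is an algebraic rearrangement of the sum of squares.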
Estimation of μ_X

The sample mean X̄ as an estimator of μ_X: μ̂_X = X̄ = (1/n) Σ_{i=1}^n x_i.

E X̄ = E[(1/n) Σ_{i=1}^n x_i] = E X (= μ), so Bias(μ̂) = E μ̂ − μ = 0. An estimator with zero bias is called "unbiased": on average it gives exactly what we want.

Var(μ̂) = (1/n²) [Σ_i σ² + Σ_{i≠j} Cov(X_i, X_j)] = (1/n²)(n σ²) = σ²/n,

where the covariance terms vanish because the observations are independent. This means that from sample to sample the estimator varies with this variance, which shrinks as n grows.
Estimation of σ²

σ² = E(X − μ)² = E X² − μ².

The sample variance: σ̂² = (1/(n−1)) Σ_i (x_i − X̄)² = (1/(n−1)) [Σ_i x_i² − n X̄²].

E σ̂² = (1/(n−1)) E[Σ_i x_i² − n X̄²]
     = (1/(n−1)) (n E X² − n E X̄²)
     = (1/(n−1)) [n (σ² + μ²) − n (σ²/n + μ²)]
     = σ²,

using E X² = σ² + μ² and E X̄² = Var X̄ + (E X̄)² = σ²/n + μ². Therefore, σ̂² is unbiased for σ².
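A quick simulation (an added sketch, not from the original slides) shows why the n − 1 divisor matters: NumPy's `ddof` parameter switches between the two divisors, and only the n − 1 version averages to σ².

```python
import numpy as np

# Sketch: dividing by n-1 (ddof=1) gives an unbiased estimate of sigma^2;
# dividing by n (ddof=0) underestimates it by the factor (n-1)/n on average.
rng = np.random.default_rng(2)
sigma2, n, reps = 4.0, 5, 200_000

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

print(np.var(x, axis=1, ddof=1).mean())   # close to sigma^2 = 4
print(np.var(x, axis=1, ddof=0).mean())   # close to (n-1)/n * sigma^2 = 3.2
```

The small n = 5 is deliberate: the gap between the two divisors is largest for small samples and vanishes as n grows.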
Estimation of Cov(X, Y)

Cov(X, Y) = E(X − μ_X)(Y − μ_Y) = E XY − μ_X μ_Y.
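By analogy with the variance slide, the sample covariance can also use an n − 1 divisor to be unbiased. The sketch below (an addition, not from the original slides) compares NumPy's `np.cov`, which divides by n − 1 by default, with the plug-in form of the identity above, which divides by n; the data is constructed so the true covariance is 0.5.

```python
import numpy as np

# Sketch: two ways to estimate Cov(X, Y) from data.
rng = np.random.default_rng(3)
n = 10_000
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)          # Cov(X, Y) = 0.5 by construction

unbiased = np.cov(x, y)[0, 1]                    # divides by n - 1
plug_in = (x * y).mean() - x.mean() * y.mean()   # E[XY] - E[X]E[Y], divides by n

print(unbiased, plug_in)                  # both close to 0.5 for large n
```

The two differ only by the factor n/(n − 1), which is negligible here but matters for small samples.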