
Lecture 1. Estimation theory.

1.1 Introduction

Let us consider a set X (a probability space), the set of possible values that some random variable (random object) may take. Usually X will be a subset of ℝ, for example {0, 1}, [0, 1], [0, ∞), ℝ, etc.

I. Parametric Statistics.

We will start by considering a family of distributions on X:

• {ℙ_θ, θ ∈ Θ}, indexed by parameter θ.

Here, Θ is the set of possible parameters, and the probability ℙ_θ describes the chances of observing values from subsets of X, i.e. for A ⊆ X, ℙ_θ(A) is the probability of observing a value from A.

Typical ways to describe a distribution: probability density function (p.d.f.), probability function (p.f.), cumulative distribution function (c.d.f.).

For example, if we denote by N(α, σ²) a normal distribution with mean α and variance σ², then θ = (α, σ²) is a parameter for this family and Θ = ℝ × [0, ∞). Next we will assume that we are given
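As a concrete illustration of the normal family above (a minimal sketch, not part of the lecture: the function names and interval are my own), here is how ℙ_θ(A) can be computed for θ = (α, σ²) and an interval A = (a, b], using the c.d.f. of N(α, σ²) expressed through the error function:

```python
import math

def normal_cdf(x, alpha, sigma2):
    """C.d.f. of N(alpha, sigma2), written via the error function erf."""
    return 0.5 * (1.0 + math.erf((x - alpha) / math.sqrt(2.0 * sigma2)))

def prob_interval(a, b, alpha, sigma2):
    """P_theta((a, b]) for theta = (alpha, sigma2): the c.d.f. difference."""
    return normal_cdf(b, alpha, sigma2) - normal_cdf(a, alpha, sigma2)

# Example: under the standard normal, theta = (0, 1),
# roughly 68.3% of the probability mass lies in (-1, 1].
p = prob_interval(-1.0, 1.0, alpha=0.0, sigma2=1.0)
```

Varying θ = (α, σ²) over Θ = ℝ × [0, ∞) changes the probability assigned to the same set A, which is exactly the sense in which the family is "indexed by the parameter θ".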


This note was uploaded on 10/11/2009 for the course STATISTICS 18.443, taught by Professor Dmitry Panchenko during the Spring '09 term at MIT.

