
# ISYE 2028 A and B, Lecture 9: Estimation and Sampling


Dr. Kobi Abayomi, March 25, 2009

## 1 Sampling

When we sample, we draw observations from a population with a distribution. Our samples are our observations. We often, almost always really, say that our samples are *independent and identically distributed*.

From our samples, we generate estimates for our population parameters. In general, our estimates are *statistics*: quantities whose value can be calculated from sample data. Prior to obtaining data, there is uncertainty as to what value of any particular statistic will result. Therefore, a statistic is a random variable and will be denoted by an uppercase letter; a lowercase letter is used to represent the calculated or observed value of the statistic.

As a random variable, a statistic has a probability distribution, which we call the *sampling distribution*. The sampling distribution depends on the population distribution (normal, uniform, etc.), the sample size $n$, and the method of sampling.

We say that the rv's (random variables) $X_1, \ldots, X_n$ are a (simple) random sample of size $n$ if:

1. The $X_i$'s are independent rv's.
2. Every $X_i$ has the same probability distribution (i.e. *identical*).
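As a quick illustration of the statistic-as-random-variable idea, the following R sketch (not from the lecture; the population vector reuses the example below, and `set.seed` is added only for reproducibility) draws several samples and shows that each yields a different observed value of the sample mean:

```r
# Illustration: the sample mean Xbar is a random variable.
# Each repetition draws a new sample, giving a different observed xbar.
set.seed(1)                                  # reproducibility only
pop <- c(40,40,45,45,45,50,50,50,50,50)      # example population used below
xbars <- replicate(5, mean(sample(pop, 10, replace = TRUE)))
print(xbars)   # five distinct realizations of the statistic
```

Repeating this many times traces out the sampling distribution of the mean, which is exactly what the `smeenvar` function later in these notes does.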

### 1.1 An example in R

```r
X <- c(40,40,45,45,45,50,50,50,50,50)
n <- 10
x <- sample(X, n, replace = TRUE)
```

Here `X` is our population from which we'll draw and `x` is our representative sample. (Remember that `help(sample)` will give you help.)

This little function will allow us to get the probability of a sample from a population:

```r
jointp <- function(smple, pop) {
  k <- length(smple)
  lpop <- length(pop)
  p <- rep(0, k)
  for (i in 1:k) {
    h <- smple[i]
    t <- pop == h        # which population values match this sample value
    tts <- sum(t)
    p[i] <- tts / lpop   # empirical probability of drawing this value
  }
  pp <- prod(p)
  print(pp)
}
```

We can calculate the true likelihood for this simple example using this information:

```r
print(jointp(x, X))
```

In general, the likelihood for observed data is the probability under the model,

$$L(\theta) = f(x; \theta),$$

and for a simple random sample it usually looks like:

$$L(\theta) = \prod_{i=1}^{n} f(x_i; \theta)$$
Example: the likelihood of $x = (0, 1, 0, 1, 0, 0, 1)$ if $X \sim \mathrm{Ber}(p)$ is $L(p) = p^3 (1-p)^4$.

Example: the likelihood of a random sample from an $\mathrm{Exp}(\lambda)$ distribution is

$$L(\lambda) = \prod_{i=1}^{n} \lambda e^{-\lambda x_i} = \lambda^n e^{-\lambda \sum_{i=1}^{n} x_i}$$

Often we examine the log-likelihood $\ln L(\theta)$; in the above example,

$$\ln L(\lambda) = n \ln \lambda - \lambda \sum_{i=1}^{n} x_i$$

We can resample from this distribution and calculate estimates. Notice that the sampled values tend to the distributional values.

```r
par(mfrow = c(2, 2))
for (n in c(10, 100, 1000, 10000)) {
  hist(sample(X, n, replace = TRUE), freq = FALSE, ylim = c(0, .6), main = "")
}
```

Here is a function to calculate the sample means and variances for many samples:

```r
smeenvar <- function(num, ssize, pop) {
  xbarvec <- rep(0, num)
  varvec <- rep(0, num)
  for (i in 1:num) {
    smple <- sample(pop, ssize, replace = TRUE)
    xbarvec[i] <- mean(smple)   # sample mean of the i-th sample
    varvec[i] <- var(smple)     # sample variance of the i-th sample
  }
  cbind(xbarvec, varvec)
}
```
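The log-likelihood above can be checked numerically. In the following R sketch (the data vector `x` is illustrative, not from the lecture), we maximize $\ln L(\lambda)$ with base R's `optimize` and compare the result to the closed-form maximizer $\hat{\lambda} = 1/\bar{x}$, which follows from setting the derivative $n/\lambda - \sum x_i$ to zero:

```r
# Sketch: numerically maximize the exponential log-likelihood
# lnL(lambda) = n*log(lambda) - lambda*sum(x), and compare with
# the closed-form MLE 1/mean(x).  Data here are illustrative.
x <- c(0.5, 1.2, 0.3, 2.1, 0.8)
loglik <- function(lambda) length(x) * log(lambda) - lambda * sum(x)
opt <- optimize(loglik, interval = c(0.01, 10), maximum = TRUE)
print(opt$maximum)   # numeric maximizer of lnL
print(1 / mean(x))   # closed-form MLE; should agree closely
```

Working on the log scale is the standard trick: the logarithm turns the product of densities into a sum, which is numerically stabler and easier to differentiate.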

We can look at the sampling distributions of these quantities:

```r
h <- smeenvar(100, 50, X)
hist(h[, 1])   # sampling distribution of the sample mean
hist(h[, 2])   # sampling distribution of the sample variance
```

## 2 Remember the first sampling distribution and esti-