1. Statistical Models, Parameter Space and Identifiability (Jan 3)

Statistics 3858 : Statistical Models, Parameter Space and Identifiability

In an experiment or observational study we have data X_1, X_2, ..., X_n. These we view as observations of random variables with some joint distribution.

Definition 1. A statistical model is a family of distributions F such that, for any possible n, a given distribution f ∈ F gives a joint distribution of X_1, X_2, ..., X_n.

Note that f above may be either a joint pdf, pmf or cdf. Every f ∈ F must specify the (joint) distribution of X_i, i = 1, ..., n. Sometimes we use a subscript n, that is f_n, to indicate the dependence on the sample size n.

For a given sample size n, let f_n be the joint pdf of the random variables X_i, i = 1, 2, ..., n. Suppose the X_i are iid with marginal pdf f. Then the joint pdf is of the form

    f_n(x_1, x_2, ..., x_n) = \prod_{i=1}^{n} f(x_i) .        (1)

There is of course the analogue for iid discrete r.v.'s. Notice also that in the iid case the statistical model can be viewed or described by the simpler one-dimensional marginal distribution. In this case we can simplify the description of the family F to the corresponding family of marginal distributions f. For example, if the X_i are iid normal, then the marginal distributions belong to

    { f(· ; θ) : θ = (μ, σ²), μ ∈ R, σ² ∈ R_+ } .

In many cases with dependent random variables we can also obtain their joint distribution. For example, consider the so-called autoregressive process of order one, AR(1). It is defined iteratively as

    X_{i+1} = ρ X_i + ε_{i+1} .        (2)

Specifically, suppose that the r.v.'s ε_i are iid N(0, σ²) and independent of the random variables with time index less than i. Let f be the N(0, σ²) pdf. Then the conditional distribution of X_1 given that X_0 = x_0 is

    f_{X_1 | X_0 = x_0}(x) = f(x − ρ x_0) .

Similarly we have the conditional distribution of X_{t+1} given X_0 = x_0, X_1 = x_1, ..., X_t = x_t, which by the Markov property is the same as the conditional distribution of X_{t+1} given X_t = x_t, given by

    f_{X_{t+1} | X_t = x_t}(x) = f(x − ρ x_t) .

This then gives the joint conditional pdf f_n of X_1, X_2, ..., X_n conditioned on X_0 = x_0 as

    f_n(x_1, x_2, ..., x_n) = \prod_{i=1}^{n} f(x_i − ρ x_{i-1}) .

In this case the joint distribution for any n is equivalent to knowing the initial condition x_0, the parameter ρ and the (marginal) distribution f of the random innovations ε. For example, one may then speak of a normal autoregressive model with initial condition x_0 as shorthand for the statistical model (2) in which the ε_t are iid N(0, σ²). Notice there are two additional ...
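The product form (1) is straightforward to evaluate numerically. As a minimal illustrative sketch (not part of the original notes), the following Python code evaluates the joint pdf and the log joint pdf of an iid normal sample at one parameter value θ = (μ, σ²); it assumes numpy and scipy are available, and the function names are hypothetical.

import numpy as np
from scipy.stats import norm

def iid_normal_joint_pdf(x, mu, sigma2):
    # Joint pdf f_n(x_1,...,x_n) = prod_i f(x_i) for iid N(mu, sigma2) observations.
    return np.prod(norm.pdf(x, loc=mu, scale=np.sqrt(sigma2)))

def iid_normal_log_joint_pdf(x, mu, sigma2):
    # Log joint pdf; summing log densities avoids numerical underflow of the product.
    return np.sum(norm.logpdf(x, loc=mu, scale=np.sqrt(sigma2)))

# Usage with one parameter value theta = (mu, sigma2), mu in R, sigma2 in R_+:
x = np.array([1.2, -0.3, 0.8, 2.1])
print(iid_normal_joint_pdf(x, mu=0.5, sigma2=1.0))
print(iid_normal_log_joint_pdf(x, mu=0.5, sigma2=1.0))

Each choice of θ in the parameter space picks out one member of the family F, which is the sense in which the iid model is described by its one-dimensional marginal distribution.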
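The AR(1) model (2) and its joint conditional pdf can be illustrated with a similar sketch (again not from the original notes; numpy and scipy are assumed and the names are hypothetical). The simulation follows the recursion X_{i+1} = ρ X_i + ε_{i+1}, and the log density sums the terms log f(x_i − ρ x_{i-1}) from the product formula above.

import numpy as np
from scipy.stats import norm

def simulate_ar1(n, rho, sigma2, x0, rng=None):
    # Generate X_1,...,X_n from X_{i+1} = rho*X_i + eps_{i+1}, eps_i iid N(0, sigma2).
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n)
    prev = x0
    for i in range(n):
        prev = rho * prev + rng.normal(0.0, np.sqrt(sigma2))
        x[i] = prev
    return x

def ar1_conditional_log_pdf(x, rho, sigma2, x0):
    # Log of f_n(x_1,...,x_n | X_0 = x0) = prod_i f(x_i - rho*x_{i-1}).
    prev = np.concatenate(([x0], x[:-1]))  # x_0, x_1, ..., x_{n-1}
    return np.sum(norm.logpdf(x - rho * prev, loc=0.0, scale=np.sqrt(sigma2)))

# Usage: simulate a path and evaluate the conditional log joint density at the true parameters.
x = simulate_ar1(n=200, rho=0.7, sigma2=1.0, x0=0.0)
print(ar1_conditional_log_pdf(x, rho=0.7, sigma2=1.0, x0=0.0))

Note that the joint conditional density depends on the data only through the successive differences x_i − ρ x_{i-1}, exactly as in the product formula above, so the model is determined by the initial condition x_0, the parameter ρ and the innovation distribution.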