Lecture II
Using the example from Bierens, Chapter 1: Assume we are interested in the game Texas lotto (similar to Florida lotto).
In this game, players choose a set of 6 numbers
out of the first 50. Note that the ordering does not matter, so that 35, 20, 1
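With 6 numbers drawn from 50 and order irrelevant, the number of possible tickets is the binomial coefficient C(50, 6); a quick computational check:

```python
from math import comb

# Number of ways to choose 6 numbers from 50 when order does not matter:
# the binomial coefficient C(50, 6).
n_tickets = comb(50, 6)
p_win = 1 / n_tickets  # probability any single ticket wins
print(n_tickets)  # 15890700
```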

Limits and the Law of Large Numbers Lecture XV
I. Almost Sure Convergence
A. White, Halbert. Asymptotic Theory for Econometricians (New York: Academic Press, 1984), Chapter II.
B. Let {Z_t} represent the entire random sequence. As discussed last time, our i
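As a numerical illustration (not part of White's text), almost sure convergence of the sample mean can be seen by tracking the running mean of a simulated i.i.d. sequence:

```python
import random

random.seed(0)

# Sample path of the running mean Z_bar_n of i.i.d. uniform draws; the
# strong law of large numbers says Z_bar_n converges almost surely to
# the population mean (here 0.5).
draws = [random.random() for _ in range(100_000)]
running_sum = 0.0
means = []
for n, z in enumerate(draws, start=1):
    running_sum += z
    means.append(running_sum / n)

print(abs(means[-1] - 0.5))  # small for large n
```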

Empirical Examples of the Central Limit Theorem: Lecture XVI
I.
Back to Asymptotic Normality
A. The characteristic function of a random variable X is defined as
φ_X(t) = E[e^{itX}] = E[cos(tX) + i sin(tX)]
= E[cos(tX)] + i E[sin(tX)]
Note that this definition parallels the definition of the moment generating function
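The definition above can be checked by Monte Carlo: for X ~ N(0, 1) the characteristic function is known to be exp(−t²/2), so a simulated E[cos(tX)] + iE[sin(tX)] should come close. A sketch:

```python
import cmath
import random

random.seed(1)

# Monte Carlo estimate of the characteristic function phi_X(t) = E[e^{itX}]
# for X ~ N(0, 1); the exact value is exp(-t^2 / 2).
xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]

def phi(t):
    return sum(cmath.exp(1j * t * x) for x in xs) / len(xs)

t = 1.0
approx = phi(t)
exact = cmath.exp(-t ** 2 / 2)
print(abs(approx - exact))  # small Monte Carlo error
```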

Definition of Estimator and Choosing among Estimators: Lecture XVII
I. What is An Estimator?
A. In the next several lectures we will be discussing statistical estimators and estimation. The book divides this discussion into the estimation of a single number, such as a mean or standard deviation, and the estimation of a range, such as a confidence interval


Mean Squared Error and Maximum Likelihood Lecture XVIII
I. Mean Squared Error
A. As stated in our discussion on closeness, one potential measure for the goodness of an estimator is E[(θ̂ − θ)²], where θ̂ is the estimator and θ is the true value. In the preceding example, the mean square error of the estimate
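The criterion E[(θ̂ − θ)²] can be estimated by simulation; this sketch (an illustration, not from the text) compares the sample mean and sample median as estimators of a normal population mean:

```python
import random
import statistics

random.seed(2)

# Monte Carlo estimate of the mean squared error E[(theta_hat - theta)^2]
# for two estimators of the mean of a N(0, 1) population: the sample mean
# and the sample median (sample size 25, 2000 replications).
theta = 0.0
reps, n = 2000, 25
se_mean = se_median = 0.0
for _ in range(reps):
    sample = [random.gauss(theta, 1.0) for _ in range(n)]
    se_mean += (statistics.fmean(sample) - theta) ** 2
    se_median += (statistics.median(sample) - theta) ** 2

mse_mean = se_mean / reps
mse_median = se_median / reps
print(mse_mean, mse_median)  # the sample mean has the smaller MSE
```

For normal data the sample mean dominates the median under this criterion; the ranking can reverse for heavier-tailed populations.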


Sufficient Statistics
Lecture XIX
I.
Data Reduction
A.
References:
Casella, G. and R.L. Berger. Statistical Inference, 2nd Edition. New York: Duxbury Press. Chapter 6, "Principles of Data Reduction," pp. 271-309.
Hogg, R.V., A. Craig, and J.W. McKean. Introduction to Mathematical Statistics


Concentrated Likelihood Functions, Normal Equations, and Properties of Maximum Likelihood: Lecture XX
I. Concentrated Likelihood Functions
A. In the last lecture I introduced the concept of maximum likelihood using a known-variance normal distribution of

Lecture XX
Concentrated Likelihood Functions
The more general form of the normal likelihood function can be written as:
L(X | μ, σ²) = ∏_{i=1}^{n} (1 / √(2πσ²)) exp( −(X_i − μ)² / (2σ²) )
ln(L) = −(n/2) ln(2πσ²) − (1/(2σ²)) ∑_{i=1}^{n} (X_i − μ)²
This expression can be
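Substituting the conditional maximizer σ̂²(μ) = (1/n) Σ (X_i − μ)² into the log-likelihood concentrates it into a function of μ alone; a small sketch (an illustration, not from the text) confirms the concentrated function peaks at the sample mean:

```python
import math
import random

random.seed(3)

# Concentrated log-likelihood for the normal: substituting the conditional
# maximizer sigma2_hat(mu) = (1/n) * sum (x_i - mu)^2 into ln L leaves a
# function of mu alone, which is maximized at the sample mean.
xs = [random.gauss(5.0, 2.0) for _ in range(500)]
n = len(xs)

def concentrated_loglik(mu):
    sigma2 = sum((x - mu) ** 2 for x in xs) / n
    return -n / 2 * math.log(2 * math.pi * sigma2) - n / 2

xbar = sum(xs) / n
grid = [xbar + d for d in (-0.5, -0.1, 0.0, 0.1, 0.5)]
best = max(grid, key=concentrated_loglik)
print(best == xbar)  # the grid point at the sample mean wins
```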

Confidence Intervals Lecture XXI
I. Interval Estimation
A. As we discussed when we talked about continuous distribution functions, the probability of a specific number under a continuous distribution is zero. Thus, if we conceptualize any estimator, ei
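A simulation sketch of interval estimation (an illustration, not from the text): the known-variance interval x̄ ± 1.96 σ/√n should cover the true mean about 95 percent of the time:

```python
import random
import statistics

random.seed(4)

# Coverage of the textbook 95% interval xbar +/- 1.96 * sigma / sqrt(n)
# for the mean of a N(10, 2^2) population with known sigma = 2.
mu, sigma, n, reps = 10.0, 2.0, 30, 2000
half = 1.96 * sigma / n ** 0.5
covered = 0
for _ in range(reps):
    xbar = statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
    if xbar - half <= mu <= xbar + half:
        covered += 1

coverage = covered / reps
print(coverage)  # close to 0.95
```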

Large Sample Theory Lecture XIV
I. Basic Sample Theory
A. The problem setup is that we want to discuss sample theory. 1. First, assume that we want to make an inference, either an estimate or a test, based on a sample. 2. We are interested in how well

Bivariate and Multivariate Normal Random Variables Lecture XIII
I. Bivariate Normal Random Variables
A. Definition 5.3.1. The bivariate normal density is defined by
f(x, y) = (2π σ_X σ_Y √(1 − ρ²))⁻¹ exp{ −(1/(2(1 − ρ²))) [ ((x − μ_X)/σ_X)² − 2ρ ((x − μ_X)/σ_X)((y − μ_Y)/σ_Y) + ((y − μ_Y)/σ_Y)² ] }
B.
Theorem 5.3.1. Let X , Y h
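A simulation sketch of the bivariate normal (not from the text): building Y = ρX + √(1 − ρ²)Z from independent standard normals X and Z yields standard normal margins with correlation ρ, matching the role of ρ in the density above:

```python
import random

random.seed(5)

# Simulating a bivariate normal pair: if X and Z are independent N(0, 1),
# then Y = rho*X + sqrt(1 - rho^2)*Z makes (X, Y) bivariate normal with
# standard normal margins and correlation rho.
rho, reps = 0.8, 100_000
sxy = 0.0
for _ in range(reps):
    x = random.gauss(0.0, 1.0)
    y = rho * x + (1 - rho ** 2) ** 0.5 * random.gauss(0.0, 1.0)
    sxy += x * y

corr = sxy / reps  # both margins have mean 0 and variance 1
print(corr)  # near 0.8
```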

Probability Theory and Measure: Lecture III
I. Uniform Probability Measure:
A. I think that Bierens' discussion of the uniform probability measure provides a firm basis for the concept of probability measure. 1. First, we follow the conceptual discussion of placing ten balls numbered 0 through 9


Random Variables and Probability Distributions: Lecture IV
I.
Conditional Probability and Independence
A. In order to define the concept of a conditional probability, it is necessary to discuss joint probabilities and marginal probabilities. 1. A joint probability
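Joint, marginal, and conditional probabilities can be illustrated with two fair dice (a hypothetical example, not from the text): with A = "first die shows 6" and B = "sum equals 8", P(A|B) = P(A ∩ B)/P(B):

```python
from fractions import Fraction
from itertools import product

# Joint, marginal, and conditional probabilities for two fair dice:
# A = "first die shows 6", B = "sum equals 8".
outcomes = list(product(range(1, 7), repeat=2))
p = Fraction(1, len(outcomes))  # each of the 36 outcomes is equally likely

p_joint = sum(p for d1, d2 in outcomes if d1 == 6 and d1 + d2 == 8)  # P(A and B)
p_b = sum(p for d1, d2 in outcomes if d1 + d2 == 8)                  # marginal P(B)
p_a_given_b = p_joint / p_b                                          # P(A|B)
print(p_a_given_b)  # 1/5
```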

Distribution Functions for Random Variables: Lecture VI
I.
Bivariate Continuous Random Variables
A. Definition 3.4.1. If there is a nonnegative function f(x, y) defined over the whole plane such that
P(x₁ ≤ X ≤ x₂, y₁ ≤ Y ≤ y₂) = ∫_{y₁}^{y₂} ∫_{x₁}^{x₂} f(x, y) dx dy
for any x
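The defining double integral can be checked numerically for the simplest case, f(x, y) = 1 on the unit square (independent uniforms), where a rectangle's probability is just its area:

```python
# Riemann-sum check of Definition 3.4.1 for the simplest bivariate density,
# f(x, y) = 1 on the unit square: the probability of a rectangle is the
# double integral of f over it, here (0.5 - 0.2) * (0.4 - 0.1) = 0.09.
x1, x2 = 0.2, 0.5
y1, y2 = 0.1, 0.4
steps = 400
hx = (x2 - x1) / steps
hy = (y2 - y1) / steps

def f(x, y):
    return 1.0 if 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 else 0.0

prob = sum(
    f(x1 + (i + 0.5) * hx, y1 + (j + 0.5) * hy) * hx * hy
    for i in range(steps)
    for j in range(steps)
)
print(prob)  # ~0.09
```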

Derivation of the Normal Distribution: Lecture VI
I. Derivation of the Normal Distribution Function
A. The order of proof of the normal distribution function is to start with the standard normal:
f(x) = (1/√(2π)) e^(−x²/2)
1. First, we need to demonstrate that the di
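The first step, that the standard normal density integrates to one, can be verified numerically (a check, not the analytic proof the lecture develops):

```python
import math

# Numerical check that the standard normal density integrates to one:
# composite trapezoidal rule on [-8, 8] (the tails beyond contribute
# only about 1e-15).
def f(x):
    return math.exp(-x ** 2 / 2) / math.sqrt(2 * math.pi)

a, b, steps = -8.0, 8.0, 10_000
h = (b - a) / steps
total = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, steps))
integral = total * h
print(integral)  # ~1.0
```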

An Applied Sabbatical: Lecture VII
I. Basic Crop Insurance
A. Nelson, Carl H. "The Influence of Distributional Assumptions on the Calculation of Crop Insurance Premia." North Central Journal of Agricultural Economics 12(1)(Jan 1990): 718.
1. In t

Mean and Higher Moments
Lecture VII
I.
Expected Value
A.
Definition 4.1.1. Let X be a discrete random variable taking the value x_i with probability P(x_i), i = 1, 2, …. Then the expected value (expectation or mean) of X, denoted E(X), is defined to be
E(X) = ∑_i x_i P(x_i),
provided the series converges ab
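Applying the definition to a fair six-sided die (a hypothetical example): E(X) = Σ x_i P(x_i) with P(x_i) = 1/6:

```python
from fractions import Fraction

# Expected value of a discrete random variable: E(X) = sum_i x_i * P(x_i),
# here for a fair six-sided die with P(x_i) = 1/6 for x_i = 1, ..., 6.
values = range(1, 7)
p = Fraction(1, 6)
expected = sum(x * p for x in values)
print(expected)  # 7/2
```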

Moments of More than One Random Variable Lecture IX
I. Covariance and Correlation
A. Definition 4.3.1:
Cov(X, Y) = E[(X − E(X))(Y − E(Y))]
= E[XY − X E(Y) − Y E(X) + E(X) E(Y)]
= E(XY) − E(X) E(Y) − E(X) E(Y) + E(X) E(Y)
= E(XY) − E(X) E(Y)
1.
Note that this is simply a generalization of the standard v
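The identity Cov(X, Y) = E(XY) − E(X)E(Y) derived above can be checked on simulated data (an illustration, not from the text):

```python
import random

random.seed(6)

# Checking Cov(X, Y) = E[(X - E X)(Y - E Y)] = E(XY) - E(X)E(Y) on a
# simulated sample; the two forms agree up to floating-point rounding.
n = 10_000
xs = [random.gauss(2.0, 1.0) for _ in range(n)]
ys = [x + random.gauss(0.0, 0.5) for x in xs]

ex = sum(xs) / n
ey = sum(ys) / n
cov_def = sum((x - ex) * (y - ey) for x, y in zip(xs, ys)) / n
cov_short = sum(x * y for x, y in zip(xs, ys)) / n - ex * ey
print(cov_def, cov_short)  # both near Cov(X, Y) = Var(X) = 1
```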

Moment Generating Functions Lecture X
I. Moment Generating Functions
A. Definition 2.3.3. Let X be a random variable with cumulative distribution function F_X. The moment generating function (mgf) of X (or F_X), denoted M_X(t), is
M_X(t) = E[e^{tX}]
provided that the expectation exists for t in some neighborhood of 0
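A defining use of the mgf is that derivatives at t = 0 yield moments; this can be checked numerically (a sketch, not from the text) via M′_X(0) = E(X):

```python
import math
import random

random.seed(7)

# The mgf M_X(t) = E[e^{tX}] reproduces moments: M'_X(0) = E(X).
# A central-difference derivative of a Monte Carlo mgf for
# X ~ N(1.5, 1) is compared with the population mean 1.5.
xs = [random.gauss(1.5, 1.0) for _ in range(100_000)]

def mgf(t):
    return sum(math.exp(t * x) for x in xs) / len(xs)

h = 1e-4
deriv = (mgf(h) - mgf(-h)) / (2 * h)
print(deriv)  # near E(X) = 1.5
```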

Binomial Random and Normal Random Variables: Lecture XI
I. Bernoulli Random Variables
A. The Bernoulli distribution characterizes the coin toss. Specifically, there are two events, X ∈ {0, 1}, with X = 1 occurring with probability p. The probability distribution
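A simulation sketch (not from the text): sampling Bernoulli(p) draws and checking that the frequency of ones approximates p:

```python
import random

random.seed(8)

# A Bernoulli(p) draw takes the value 1 with probability p and 0 with
# probability 1 - p; the sample frequency of ones estimates p.
p, n = 0.3, 50_000
draws = [1 if random.random() < p else 0 for _ in range(n)]
freq = sum(draws) / n
print(freq)  # near 0.3
```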

Normal Random Variables Lecture XII
I. Univariate Normal Distribution.
A. Definition 5.2.1. The normal density is given by
f(x) = (1/(√(2π) σ)) exp( −(x − μ)² / (2σ²) ), −∞ < x < ∞, σ > 0
When X has the above density, we write symbolically X ~ N(μ, σ²).
B. Theorem 5.2.1. Let X be N(μ, σ²). The
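A simulation sketch of the standardization implied by the density above (not from the text): Z = (X − μ)/σ should have mean 0 and standard deviation 1:

```python
import random
import statistics

random.seed(9)

# If X ~ N(mu, sigma^2), then Z = (X - mu)/sigma is standard normal;
# the standardized sample should have mean near 0 and stdev near 1.
mu, sigma = 4.0, 3.0
xs = [random.gauss(mu, sigma) for _ in range(100_000)]
zs = [(x - mu) / sigma for x in xs]
print(statistics.fmean(zs), statistics.pstdev(zs))
```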


Bayesian Estimation and Confidence Intervals Lecture XXII
I. Bayesian Estimation
A. Implicitly in our previous discussions about estimation, we adopted a classical viewpoint. 1. We had some process generating random observations. 2. This random process was a functi
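As a concrete sketch of the Bayesian viewpoint (a standard conjugate example, not taken from the lecture): with a Beta(a, b) prior on a Bernoulli success probability and k successes in n trials, the posterior is Beta(a + k, b + n − k):

```python
from fractions import Fraction

# Bayesian updating sketch: with a Beta(a, b) prior on the probability p
# of success and k successes in n Bernoulli trials, the posterior is
# Beta(a + k, b + n - k), with posterior mean (a + k)/(a + b + n).
a, b = 1, 1          # uniform prior
k, n = 7, 10         # observed data (hypothetical)
post_a, post_b = a + k, b + n - k
post_mean = Fraction(post_a, post_a + post_b)
print(post_mean)  # 2/3
```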



Statistics for Food and Resource Economics
Final 1999
Attached is a table of corn yields and nitrogen, phosphorus, and potash
applied for acres of production in Illinois.
1. Compute the ordinary least squares estimates for the linear production
function o
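Since the yield table itself is not reproduced here, the following sketch uses hypothetical stand-in data to show the OLS computation b̂ = (X′X)⁻¹X′y for a linear production function:

```python
# Ordinary least squares sketch for a linear production function
# yield = b0 + b1*N + b2*P + b3*K. The numbers below are hypothetical
# stand-ins for the Illinois corn-yield table, which is not attached here.
import numpy as np

X = np.array([
    [1.0, 120.0, 40.0, 60.0],
    [1.0, 150.0, 50.0, 70.0],
    [1.0, 100.0, 30.0, 50.0],
    [1.0, 180.0, 60.0, 80.0],
    [1.0, 140.0, 45.0, 65.0],
    [1.0, 160.0, 55.0, 75.0],
])  # columns: intercept, nitrogen, phosphorus, potash
y = np.array([130.0, 150.0, 115.0, 170.0, 145.0, 160.0])

# b_hat solves min ||y - X b||^2; computed with a least-squares solver
b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ b_hat
print(b_hat)
```

Any least-squares solution satisfies the normal equations X′(y − Xb̂) = 0, which gives a quick correctness check on the fit.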