Introduction to ACL1
OBJECTIVE
The objective of this introduction is for students to gain access to and perform some
basic functions using the ACL software. This assignment is written as a tutorial to help guide
your initial experience using th
3.3 Continuous Distribution
3.3.1 Uniform Distribution
The continuous uniform distribution is defined by spreading mass uniformly over an interval [a, b]. Its pdf is given by

f(x \mid a, b) =
\begin{cases}
\dfrac{1}{b - a} & \text{if } x \in [a, b] \\
0 & \text{otherwise.}
\end{cases}

It is easy to check that \int_a^b f(x \mid a, b)\, dx = 1.
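As a quick numerical sanity check (a sketch, not part of the text), a midpoint Riemann sum of the Uniform(a, b) pdf over [a, b] should return approximately 1 for any choice of a < b; here a = 2 and b = 5 are arbitrary:

```python
# Numerical check that the Uniform(a, b) pdf integrates to 1.
def uniform_pdf(x, a, b):
    return 1.0 / (b - a) if a <= x <= b else 0.0

def riemann_integral(f, lo, hi, n=100_000):
    # Midpoint Riemann sum with n subintervals.
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

a, b = 2.0, 5.0
total = riemann_integral(lambda x: uniform_pdf(x, a, b), a, b)
print(total)  # very close to 1.0
```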
3.3.3 Normal Distribution
The normal distribution has several advantages over the other distributions.
a. The normal distribution and distributions associated with it are analytically very tractable.
b. The normal distribution has the familiar bell sh
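One concrete consequence of the bell shape (a simulation sketch, not from the text): for a N(0, 1) variable, about 68% of draws fall within one standard deviation of the mean.

```python
import random

random.seed(0)
draws = [random.gauss(0.0, 1.0) for _ in range(100_000)]
within_one_sd = sum(abs(z) <= 1.0 for z in draws) / len(draws)
print(within_one_sd)  # roughly 0.683
```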
3.3.4 Beta Distribution
The beta(α, β) pdf is

f(x \mid \alpha, \beta) = \frac{1}{B(\alpha, \beta)}\, x^{\alpha - 1} (1 - x)^{\beta - 1}, \qquad 0 < x < 1, \quad \alpha > 0, \quad \beta > 0,

where B(α, β) denotes the beta function,

B(\alpha, \beta) = \int_0^1 x^{\alpha - 1} (1 - x)^{\beta - 1}\, dx = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha + \beta)}.

For n > -\alpha, we have

E\, X^n = \frac{1}{B(\alpha, \beta)} \int_0^1 x^n\, x^{\alpha - 1} (1 - x)^{\beta - 1}\, dx
        = \frac{B(\alpha + n, \beta)}{B(\alpha, \beta)}
        = \frac{\Gamma(\alpha + n)\, \Gamma(\alpha + \beta)}{\Gamma(\alpha + \beta + n)\, \Gamma(\alpha)}.
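The last identity can be checked numerically (a sketch; `math.gamma` is the standard-library Γ function, and the values of α, β, n are arbitrary choices):

```python
import math

def beta_fn(a, b):
    # B(a, b) = Γ(a)Γ(b) / Γ(a + b)
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

alpha, beta, n = 2.5, 3.0, 2
lhs = beta_fn(alpha + n, beta) / beta_fn(alpha, beta)
rhs = (math.gamma(alpha + n) * math.gamma(alpha + beta)
       / (math.gamma(alpha + beta + n) * math.gamma(alpha)))
print(lhs, rhs)  # the two expressions agree
```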
3.4 Exponential Families
A family of pdfs or pmfs is called an exponential family if it can be expressed as

f(x \mid \theta) = h(x)\, c(\theta) \exp\left( \sum_{i=1}^{k} w_i(\theta)\, t_i(x) \right). \qquad (1)

Here h(x) ≥ 0 and t_1(x), . . . , t_k(x) are real-valued functions of the observation x (they cannot
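As a standard illustration (not part of the fragment above), the Bernoulli(p) pmf can be put in form (1):

```latex
f(x \mid p) = p^x (1-p)^{1-x}
            = (1-p) \exp\!\left( x \log\frac{p}{1-p} \right), \qquad x \in \{0, 1\},
```

so here h(x) = 1 on {0, 1}, c(p) = 1 − p, k = 1, w_1(p) = \log\frac{p}{1-p}, and t_1(x) = x.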
3.6 Inequalities and Identities
Theorem 3.6.1 (Chebychev's Inequality) Let X be a random variable and let g(x) be a nonnegative function. Then, for any r > 0,

P(g(X) \ge r) \le \frac{E\, g(X)}{r}.

Proof:

E\, g(X) = \int_{-\infty}^{\infty} g(x) f_X(x)\, dx \ge \int_{\{x :\, g(x) \ge r\}} g(x) f_X(x)\, dx \quad (g \text{ is nonnegative})
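A Monte Carlo sketch of the inequality, with g(x) = x² and r = 4 as my own illustrative choices: the estimated probability should sit below the estimated bound.

```python
import random

random.seed(1)
draws = [random.gauss(0.0, 1.0) for _ in range(100_000)]

g = lambda x: x * x
r = 4.0
lhs = sum(g(x) >= r for x in draws) / len(draws)   # estimates P(g(X) >= r)
rhs = sum(g(x) for x in draws) / len(draws) / r    # estimates E g(X) / r
print(lhs, rhs)  # lhs is below rhs, as the inequality requires
```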
3 Bivariate Transformations
Let (X, Y) be a bivariate random vector with a known probability distribution. Let U = g_1(X, Y)
and V = g_2(X, Y), where g_1(x, y) and g_2(x, y) are some specified functions. If B is any subset of
R², then (U, V) ∈ B if an
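A small simulation in the spirit of this section (my own example, not the text's): with g_1(x, y) = x + y and g_2(x, y) = x − y applied to independent N(0, 1) inputs, the transformed pair (U, V) is uncorrelated.

```python
import random

random.seed(2)
n = 100_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [random.gauss(0.0, 1.0) for _ in range(n)]
us = [x + y for x, y in zip(xs, ys)]  # U = g1(X, Y) = X + Y
vs = [x - y for x, y in zip(xs, ys)]  # V = g2(X, Y) = X - Y

mu_u = sum(us) / n
mu_v = sum(vs) / n
cov_uv = sum((u - mu_u) * (v - mu_v) for u, v in zip(us, vs)) / n
print(cov_uv)  # near 0: U and V are uncorrelated
```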
4 Hierarchical Models and Mixture Distributions
Example 4.1 (Binomial-Poisson hierarchy) Perhaps the most classic hierarchical model is the following. An insect lays a large number of eggs, each surviving with probability p. On the average, how many eggs
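A simulation sketch of this hierarchy (λ and p are my illustrative choices): if the number of eggs is N ~ Poisson(λ) and each egg survives independently with probability p, the expected number of survivors is λp.

```python
import math
import random

random.seed(3)

def poisson(lam):
    # Knuth's multiplication method for Poisson sampling.
    threshold = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= random.random()
        if prod <= threshold:
            return k
        k += 1

lam, p = 10.0, 0.3
trials = 20_000
survivors = [sum(random.random() < p for _ in range(poisson(lam)))
             for _ in range(trials)]
mean_survivors = sum(survivors) / trials
print(mean_survivors)  # close to lam * p = 3.0
```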
4. Multiple Random Variables
4.1 Joint and Marginal Distributions
Definition 4.1.1 An n-dimensional random vector is a function from a sample space S into R^n, n-dimensional Euclidean space. Suppose, for example, that with each point in a sample space we as
4.2 Conditional Distributions and Independence
Definition 4.2.1 Let (X, Y) be a discrete bivariate random vector with joint pmf f(x, y) and marginal pmfs f_X(x) and f_Y(y). For any x such that P(X = x) = f_X(x) > 0, the conditional pmf of Y given that
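A computational sketch of this definition on a toy joint pmf (the table values are invented for illustration): the conditional pmf is f(y | x) = f(x, y) / f_X(x), defined whenever f_X(x) > 0.

```python
# Joint pmf of (X, Y) as a dict: (x, y) -> f(x, y).  Values sum to 1.
joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

def marginal_x(x):
    # f_X(x) = sum over y of f(x, y)
    return sum(p for (xi, _), p in joint.items() if xi == x)

def conditional_y_given_x(y, x):
    # f(y | x) = f(x, y) / f_X(x), defined when f_X(x) > 0.
    return joint.get((x, y), 0.0) / marginal_x(x)

print(conditional_y_given_x(1, 0))  # 0.2 / 0.3
```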
4.5 Covariance and Correlation
In earlier sections, we have discussed the absence or presence of a relationship between two random variables: independence or nonindependence. But if there is a relationship, it may be strong or weak. In this
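The strength of a linear relationship is measured by the sample covariance and correlation; a stdlib-only sketch (the data points are invented, chosen to be nearly linear):

```python
import math

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 10.1]  # roughly ys = 2 * xs, so a strong relationship

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
corr = cov / (sx * sy)
print(corr)  # close to 1 for a near-linear relationship
```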
Multivariate Distribution
The random vector X = (X_1, . . . , X_n) has a sample space that is a subset of R^n. If X is a discrete random vector, then the joint pmf of X is the function defined by f(x) = f(x_1, . . . , x_n) = P(X_1 = x_1, . . . , X_n = x_n) for
5.1 Basic Concepts of Random Samples
Definition 5.1.1
The random variables X1 , . . . , Xn are called a random sample of size n from the population
f (x) if X1 , . . . , Xn are mutually independent random variables and the marginal pdf or
pmf of each Xi is
5.3.1 Properties of the sample mean and variance
Lemma 5.3.2 (Facts about chi-squared random variables) We use the notation χ²_p to denote a chi-squared random variable with p degrees of freedom. (a) If Z is a N(0, 1) random variable, then Z² ∼ χ²_1; that is
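Fact (a) can be checked by simulation (a sketch): a χ²_1 variable has mean 1 and variance 2, so squared standard normal draws should show the same moments.

```python
import random

random.seed(4)
n = 200_000
z2 = [random.gauss(0.0, 1.0) ** 2 for _ in range(n)]  # Z^2 draws
mean = sum(z2) / n
var = sum((v - mean) ** 2 for v in z2) / (n - 1)
print(mean, var)  # near 1 and 2, the chi-squared(1) moments
```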
5.3.2 The Derived Distributions: Student's t and Snedecor's F
Definition Let X_1, . . . , X_n be a random sample from a N(μ, σ²) distribution. The quantity (X̄ − μ)/(S/√n) has Student's t distribution with n − 1 degrees of freedom. Equivalently, a random variable T h
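Computing the quantity (X̄ − μ)/(S/√n) for a concrete sample (the data and μ are my illustrative choices):

```python
import math
import statistics

sample = [1.0, 2.0, 3.0, 4.0, 5.0]
mu = 3.0                       # hypothesized population mean
n = len(sample)
xbar = statistics.mean(sample)
s = statistics.stdev(sample)   # sample standard deviation (n - 1 denominator)
t = (xbar - mu) / (s / math.sqrt(n))
print(t)  # 0.0 here, since xbar equals mu exactly
```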
Sums of Random Variables from a Random Sample
Definition 5.2.1
Let X1 , . . . , Xn be a random sample of size n from a population and let T (x1 , . . . , xn ) be a
real-valued or vector-valued function whose domain includes the sample space of (X1 , . . . ,
1 Order Statistics
Definition The order statistics of a random sample X_1, . . . , X_n are the sample values placed in ascending order. They are denoted by X_(1), . . . , X_(n). The order statistics are random variables that satisfy X_(1) ≤ X_(2) ≤ · · · ≤ X_(n). The fo
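In code, the order statistics are simply the sorted sample; in particular X_(1) is the sample minimum and X_(n) the sample maximum (the data here are invented):

```python
sample = [0.62, 0.17, 0.89, 0.41, 0.05]
order_stats = sorted(sample)          # X_(1) <= X_(2) <= ... <= X_(n)
x_min, x_max = order_stats[0], order_stats[-1]
print(order_stats)   # [0.05, 0.17, 0.41, 0.62, 0.89]
print(x_min, x_max)  # 0.05 0.89
```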
5.5 Convergence Concepts
This section treats the somewhat fanciful idea of allowing the sample size to approach infinity
and investigates the behavior of certain sample quantities as this happens. We are mainly
concerned with three types of convergence, and
3.2.3 Binomial Distribution
The binomial distribution is based on the idea of a Bernoulli trial. A Bernoulli trial is
an experiment with two, and only two, possible outcomes. A random variable X has a
Bernoulli(p) distribution if
X = \begin{cases} 1 & \text{with probability } p \\ 0 & \text{with probability } 1 - p. \end{cases}
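A simulation sketch (p = 0.3 is an arbitrary choice): the long-run frequency of 1s in repeated Bernoulli(p) trials approaches p.

```python
import random

random.seed(5)
p = 0.3
draws = [1 if random.random() < p else 0 for _ in range(100_000)]
freq = sum(draws) / len(draws)
print(freq)  # close to p = 0.3
```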
3.2.4 Poisson Distribution
Definition Let X be the number of events per basic unit. For example: the number of raindrops in one minute; the number of cars passing by you in an hour; the number of chocolate particles in one ChoCoChip cookie; the number of typos in one pag
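The Poisson(λ) pmf is f(x) = e^{−λ} λ^x / x! for x = 0, 1, 2, . . . (this formula is standard, not shown in the truncated fragment; λ = 2 is my choice). Tabulating it with the standard library, the probabilities sum to 1:

```python
import math

lam = 2.0

def poisson_pmf(x, lam):
    # f(x) = e^{-lam} * lam^x / x!
    return math.exp(-lam) * lam ** x / math.factorial(x)

total = sum(poisson_pmf(x, lam) for x in range(60))
print(total)  # very close to 1.0
```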
Advanced Statistical Inference I
Homework 1: Probability Theory
Due Date: October 7th
1. (Detect mixture distribution) Exercise 1.6.
2. (Countable additivity and Kolmogorov's Axiom) Exercise 1.12 and Exercise 1.35.
3. (Information and Conditioning) Exercis
Lecture 1: Set Theory
1 Set Theory
One of the main objectives of a statistician is to draw conclusions about a population of objects by
conducting an experiment. The first step in this endeavor is to identify the possible outcomes or, in
statistical terminol
1.2.3 Counting and Equally Likely Outcomes
Methods of counting are often used in order to construct probability assignments on finite
sample spaces, although they can be used to answer other questions also. The following
theorem is sometimes known as the Fu
1.3 Conditional Probability and Independence
All of the probabilities that we have dealt with thus far have been unconditional probabilities.
A sample space was defined and all probabilities were calculated with respect to that sample
space. In many instanc
1.6. Density and Mass Functions
Definition 1.6.1 (Probability Mass Function) The probability mass function (pmf) of a discrete random variable X is given by f_X(x) = P(X = x) for all x.
Example 1.6.2 (Geometric probabilities) For the geometric distribution
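A sketch of geometric probabilities (assuming the version supported on x = 1, 2, . . ., counting trials until the first success, with p = 0.25 as my choice): the pmf values sum to 1.

```python
p = 0.25

def geometric_pmf(x, p):
    # P(X = x) = (1 - p)^(x - 1) * p for x = 1, 2, ...
    return (1 - p) ** (x - 1) * p

total = sum(geometric_pmf(x, p) for x in range(1, 500))
print(total)  # effectively 1.0; the tail beyond x = 500 is negligible
```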
Lecture 2 : Basics of Probability Theory
When an experiment is performed, the realization of the experiment is an outcome in the sample
space. If the experiment is performed a number of times, different outcomes may occur each time
or some outcomes may repe