STAT 428 Spring 2010
Homework #4 Feb 26
Homework 4
Due in class on Friday, Mar 5.
1. The following table is the first 6 rows and first 6 columns of the 12 × 12 table recording the month of birth and death for 82 descendants of Queen Victoria (we went over this
Homework 5 solution
1 Problem 1
For this problem, what we need to remember is how to update the weight for each chain we generate. The formula is w_{i,0} = 1, w_{i,T} = g(Y_T | Y_{T−1}) · w_{i,T−1}, where i denotes the i-th chain or sample and T = 1, 2, …, 10 denotes the T-
STAT 428: Homework 1
Qianyu Cheng
January 25, 2016
3.2 (A similar example can be found in the RV generation notes, p. 6.) F⁻¹(U) has the same distribution as X. To generate a random observation X, first generate a Uniform(0,1) variate u and deliver the inverse x = F⁻¹(u).
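As a concrete sketch of the inverse-transform method, the Exponential distribution has a closed-form inverse CDF; the Exponential(λ) target here is an illustrative assumption, not the distribution in the exercise.

```r
# Inverse transform sampling, illustrated with Exponential(lambda):
# F(x) = 1 - exp(-lambda * x), so F^{-1}(u) = -log(1 - u) / lambda.
rexp_inv <- function(n, lambda = 1) {
  u <- runif(n)             # step 1: generate Uniform(0,1) variates
  -log(1 - u) / lambda      # step 2: deliver the inverse CDF at u
}

set.seed(1)
x <- rexp_inv(1e5, lambda = 2)
mean(x)   # should be near 1 / lambda = 0.5
```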
Homework 3 solution
1 Problem 1
We can just use the algorithm described in HW2:
Step 1: Generate y from the Uniform(0,1) distribution.
Step 2: Generate u from Uniform(0,1).
Step 3: Accept y if u <
R code:
x <- NULL
num <- 0
while (num < 100) {
  y <- runif(1)
  u <- runif(1)
  if (u
Homework 4 solution
1 Problem 1
a) For this problem, the R code basically follows the lecture notes. In my code, I extended the table with the row sums as the last column and the column sums as the last row, so the table is now as follows.
Table 1: New table Co
STAT 428 Spring 2010
Homework 3 Feb 17
Homework 3
Due in class on Friday, Feb 26.
1. Implement your rejection sampling algorithm in R for Problem 4(b) of Homework 2 to generate 100 samples from the truncated Weibull distribution with density f̃(x) ∝ f(x) · 1{0 < x < 1}. Base
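One way to implement this is with a Uniform(0,1) proposal and a constant envelope; since the Weibull parameters come from Homework 2 and are not shown in this excerpt, the shape = 2, scale = 1 below are illustrative assumptions.

```r
# Rejection sampling from a Weibull density truncated to (0, 1).
# Assumed parameters (shape = 2, scale = 1) are for illustration only.
# Proposal g = Uniform(0, 1); M bounds the unnormalized target on (0, 1):
# dweibull(x, 2, 1) = 2 x exp(-x^2) <= 1 there, so M = 1 works.
set.seed(1)
M <- 1
samples <- numeric(0)
while (length(samples) < 100) {
  y <- runif(1)                                   # draw from the proposal
  u <- runif(1)
  if (u < dweibull(y, shape = 2, scale = 1) / M)  # accept w.p. f(y) / (M g(y))
    samples <- c(samples, y)
}
length(samples)   # 100 accepted draws, all in (0, 1)
```

Note that the normalizing constant of the truncated density cancels in the acceptance ratio, so the untruncated Weibull density can be used directly.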
STAT 428 Spring 2010
Homework 2 Feb 5
Homework 2
Due in class on Friday, Feb 12.
1. The following is another version of the Box-Muller algorithm.
1. Generate Y1 and Y2 independently from the exponential distribution with parameter 1 until Y2 > (1 − Y1)²/2.
2.
Homework 2 solution
1 Problem 1
We need to show that P(X ≤ x) = Φ(x), where

Φ(x) = ∫_{−∞}^{x} (1/√(2π)) exp(−t²/2) dt

is the CDF of N(0, 1).

P(X ≤ x) = P(X ≤ x, U ≤ 0.5) + P(X ≤ x, U > 0.5)
         = P(X ≤ x | U ≤ 0.5) P(U ≤ 0.5) + P(X ≤ x | U > 0.5) P(U > 0.5)
         = (1/2) P(X ≤ x | U ≤ 0.5) + (1/2)
STAT 428 Spring 2010
Homework #1 Jan 27
Homework 1
Due in class on Wednesday, Feb 3.
1. Use the naive Monte Carlo method to estimate E(X⁶), where X has a normal distribution with mean 3 and variance 4. Describe your algorithm and implement it in R. Giv
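The naive Monte Carlo estimator for this problem can be sketched directly: draw from N(3, 4) (so sd = 2 in R's parameterization) and average the sixth powers.

```r
# Naive Monte Carlo estimate of E(X^6) for X ~ N(mean = 3, variance = 4).
set.seed(428)
n <- 1e6
x <- rnorm(n, mean = 3, sd = 2)   # note: rnorm takes the sd, not the variance
est <- mean(x^6)                  # Monte Carlo estimate of E(X^6)
se  <- sd(x^6) / sqrt(n)          # its Monte Carlo standard error
c(estimate = est, std.error = se)
```

Expanding E((3 + 2Z)⁶) with the standard normal moments E(Z²) = 1, E(Z⁴) = 3, E(Z⁶) = 15 gives the exact value 13029, so the estimate should land close to that.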
Homework 6 solution
1 Problem 1
Since p-value = Σ_T 1{p(T) ≤ p(T₀)} p(T) = E(1{p(T) ≤ p(T₀)}), we can sample from p(T) by the Metropolis-Hastings algorithm, and use the sample mean of those indicator functions to approximate the theoretical mean. In order to estimate
STAT 428 Spring 2010
Homework #6 Apr 7
Homework 6
Due in class on Friday, Apr 16.
1. (Diaconis and Sturmfels, 1985) The following table shows data gathered to test the hypothesis of association between birthday and deathday. The table records the month of
Monitoring Convergence
One challenge of MCMC is knowing when the
chain has converged to its target distribution.
There is much research on this topic, and we
will discuss a popular method by Gelman and
Rubin.
Let ψ be a summary statistic that estimates
some
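A minimal sketch of the Gelman-Rubin diagnostic: given m chains of length n for a summary statistic, compare the between-chain and within-chain variances. The pooled-variance form of R-hat below is the usual one, and the four well-mixed toy chains are only an illustration.

```r
# Gelman-Rubin potential scale reduction factor (R-hat).
# psi: an n x m matrix of a summary statistic, one column per chain.
gelman_rubin <- function(psi) {
  n <- nrow(psi)
  B <- n * var(colMeans(psi))          # between-chain variance
  W <- mean(apply(psi, 2, var))        # within-chain variance
  v_hat <- (n - 1) / n * W + B / n     # pooled variance estimate
  sqrt(v_hat / W)                      # values near 1 suggest convergence
}

set.seed(1)
psi <- matrix(rnorm(4000), nrow = 1000, ncol = 4)  # 4 chains, already mixed
gelman_rubin(psi)   # close to 1
```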
5.1 An Introduction to Classical Decision Theory
5.2 Monte Carlo Integration
5.3 Standard Error of θ̂
5.4 Confidence Interval for θ̂
5.5 Comparing estimators
Motivating example for Statistical Computing
Importance Sampling
Stratified Importance Sampling
Antithetic
3.1 Basic Methods: generating pseudo-random uniform numbers
3.2 Desires for a Uniform Pseudo-Random Generator
3.3 History
Early Method: von Neumann's Midsquare Method
Linear Congruential and Shift Register Generators
Congruential algorithm:
Shift Register
The Bootstrap and Jackknife
The Empirical Distribution
Do an experiment n times and observe n values x1, x2, …, xn of a random variable X ~ F,
where we understand F to be the CDF of the
population.
For simplicity in most of the discussion that follows it
w
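The empirical distribution function Fn places mass 1/n at each observation; R's ecdf() constructs it directly. A minimal sketch (the N(2, 1) population is an illustrative assumption):

```r
# Build and query the empirical CDF of a sample.
set.seed(1)
x  <- rnorm(50, mean = 2)  # n = 50 draws from the "population" F
Fn <- ecdf(x)              # step function: Fn(t) = #(x_i <= t) / n
Fn(2)                      # empirical estimate of P(X <= 2)
```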
Bayesian Statistics and MCMC
Chapter 9 concerns Markov Chain Monte Carlo
(MCMC), which is quite general in some ways
but is primarily applied in Bayesian data analysis.
Before we begin with the material of Chapter
9, we'll review some background on Bayesia
We'll conclude the course with a couple of lectures on root-finding and optimization, and introduce some functions in R designed for these
tasks.
Let f(x) be a continuous function f: ℝ¹ → ℝ¹.
A root of the equation f(x) = c is a number x
that satisfies
g(x) = f(x) − c = 0.
Permutation Tests
Classical Question: Given X1, X2, …, Xn IID from Fx, and Y1, Y2, …, Ym IID from Fy:
Question: Do X and Y have the same distribution? I.e., does Fx = Fy?
Two approaches in earlier classes:
1. Assume the X's and Y's are both normal: X ~ N(μx, σx²) and Y ~ N(μy,
The Independence Sampler
Another special case of the MHA is the independence sampler, in which the proposal distribution is independent of the current state of
the chain. Specifically, g(Y |Xt) = g(Y ). In this
case, the probability of accepting a proposa
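An independence sampler can be sketched as follows; the Beta(2, 2) target and Uniform(0, 1) proposal are illustrative choices. Because the proposal is uniform, the g terms cancel in the acceptance ratio min{1, f(y)g(x) / (f(x)g(y))}.

```r
# Independence sampler: the proposal g(Y) ignores the current state.
set.seed(1)
n <- 10000
chain <- numeric(n)
chain[1] <- 0.5
for (t in 2:n) {
  y <- runif(1)                 # proposal draw, independent of chain[t - 1]
  ratio <- dbeta(y, 2, 2) / dbeta(chain[t - 1], 2, 2)  # uniform g cancels
  chain[t] <- if (runif(1) < ratio) y else chain[t - 1]
}
mean(chain)   # should approach the Beta(2, 2) mean, 0.5
```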
The Bootstrap and Jackknife
The Empirical Distribution
Do an experiment n times and observe n values x1, x2, ., xn of a random variable X F ,
where we understand F to be the CDF of the
population.
For simplicity in most of the discussion that follows it
w
Monte Carlo Inference
Let X1, X2, …, Xn be a random sample from the distribution of X, and let θ be a parameter describing some characteristic of this distribution. An estimator

θ̂ = θ̂(X1, X2, …, Xn)

is a function of the sample.
Monte Carlo techniques can be u
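For example, the sampling distribution of an estimator can be approximated by repeated simulation; the sample median of n = 25 standard normal draws below is an illustrative choice.

```r
# Monte Carlo approximation of an estimator's sampling distribution.
set.seed(1)
reps <- 5000
medians <- replicate(reps, median(rnorm(25)))  # 5000 simulated estimates
mean(medians)   # near the true median, 0
sd(medians)     # Monte Carlo estimate of the median's standard error
```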
The Metropolis-Hastings Algorithm
We have discussed some Bayesian theory and
some basic theory for Markov Chains. Now
let's consider a particular method for constructing a Markov chain that has the target distribution we wish, and later in the lecture we'll
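A random-walk Metropolis sketch makes the construction concrete; the N(0, 1) target is an illustrative choice. With a symmetric proposal the Hastings ratio reduces to f(y)/f(x_t).

```r
# Random-walk Metropolis targeting the standard normal density.
set.seed(1)
n <- 10000
x <- numeric(n)    # x[1] = 0, the starting state
for (t in 2:n) {
  y <- x[t - 1] + rnorm(1)                     # symmetric proposal step
  if (runif(1) < dnorm(y) / dnorm(x[t - 1])) { # Metropolis acceptance
    x[t] <- y
  } else {
    x[t] <- x[t - 1]
  }
}
c(mean(x), sd(x))   # should approach 0 and 1
```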
[Figure: Various Beta Distributions]
STAT 428
Spring 2017
1. Let the p.d.f. f(x) = θx^(θ−1) for θ > 0 and 0 < x < 1.
a. Write an R function to draw random samples from f that takes as arguments
the sample size n and the parameter θ.
b. Choose a value of θ and draw a very large sample x from f and p
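A sketch of both parts, assuming f is the density above: the CDF is F(x) = x^θ on (0, 1), so the inverse-transform draw is X = U^(1/θ).

```r
# Part (a): inverse-transform sampler for f(x) = theta * x^(theta - 1).
rtheta <- function(n, theta) runif(n)^(1 / theta)

# Part (b): with theta = 3 the true mean is theta / (theta + 1) = 0.75.
set.seed(1)
x <- rtheta(1e5, theta = 3)
mean(x)   # should be near 0.75
```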
MCMC Appl. Bayes
Markov Chain Monte Carlo and Applied Bayesian
Statistics: a short course
Chris Holmes
Professor of Biostatistics
Oxford Centre for Gene Function
1
Objectives of Course
To introduce the Bayesian approach to statistical da
Statistics 580
Maximum Likelihood Estimation
Introduction
Let y = (y₁, y₂, …, yₙ)′ be a vector of iid random variables from one of a family of distributions on ℝⁿ, indexed by a p-dimensional parameter θ = (θ₁, …, θ_p)′ where θ ∈ Θ ⊆ ℝᵖ and p ≤ n. De
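Numerical maximization of the likelihood can be sketched with optim(); the N(μ, σ²) model and simulated data below are illustrative assumptions.

```r
# Maximum likelihood for iid N(mu, sigma^2) data via optim().
set.seed(1)
y <- rnorm(200, mean = 5, sd = 2)

# Negative log-likelihood; par[2] is log(sigma) so sigma stays positive.
negloglik <- function(par)
  -sum(dnorm(y, mean = par[1], sd = exp(par[2]), log = TRUE))

fit <- optim(c(0, 0), negloglik)   # Nelder-Mead by default
mu_hat    <- fit$par[1]            # should match the sample mean
sigma_hat <- exp(fit$par[2])       # should match the MLE of sigma
```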
Monte Carlo Integration
Consider the problem of evaluating
θ = ∫₀¹ g(x) dx
for some function g that is integrable on the
interval (0, 1). When an analytical solution is
not available, simulation techniques can often
be used for this purpose.
Suppose that a
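The estimator suggested by this setup is the sample mean of g evaluated at uniforms; a sketch with g(x) = x², whose integral over (0, 1) is 1/3:

```r
# Simple Monte Carlo integration of theta = integral_0^1 g(x) dx.
g <- function(x) x^2
set.seed(1)
u <- runif(1e5)          # U_i ~ Uniform(0, 1)
theta_hat <- mean(g(u))  # theta-hat = (1/n) sum g(U_i)
theta_hat                # should be near 1/3
```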
Random Number Generation
Simulation plays an increasingly large role in
statistics. It is central to modern Bayesian
data analysis, has long been utilized to study
properties of statistical procedures that can't
be easily derived analytically, and has nume
Permutation Tests
Consider two independent random samples
X1, X2, …, Xn and Y1, Y2, …, Ym from distributions Fx and Fy, respectively.
Let Z be the set {X1, …, Xn, Y1, …, Ym} indexed
by the index set {1, …, n, n + 1, …, n + m}.
Note that Zi = Xi if 1 ≤ i ≤ n and Zi
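A permutation test on Z can be sketched as follows, using the difference in sample means as the statistic; the simulated data (both samples standard normal, so the null holds) are illustrative.

```r
# Permutation test of H0: Fx = Fy via the difference in means.
set.seed(1)
x <- rnorm(15); y <- rnorm(20)
z <- c(x, y); n <- length(x)
t_obs <- mean(x) - mean(y)                # observed statistic
perm_stats <- replicate(999, {
  idx <- sample(length(z), n)             # random relabeling of Z
  mean(z[idx]) - mean(z[-idx])
})
# Two-sided p-value, counting the observed statistic itself:
p_value <- mean(abs(c(perm_stats, t_obs)) >= abs(t_obs))
p_value
```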
Pension Example
Jeff
4/4/2017
Implementation
Let's read in the corresponding dataset from Thisted (1988) and implement the EM algorithm. Here nobs = (3062, 587, 284, 103, 33, 4, 2).
nobs <- c(3062, 587, 284, 103, 33, 4, 2)
children <- c(0, 1, 2, 3, 4, 5, 6)
crossprod <- sum(
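The model being fit is cut off in this excerpt; a common treatment of count data with this many zeros is a zero-inflated Poisson, so the EM sketch below assumes that model (mixing weight pi on a point mass at zero) rather than reproducing the original code.

```r
# EM for an assumed zero-inflated Poisson fit to the children counts.
nobs     <- c(3062, 587, 284, 103, 33, 4, 2)
children <- 0:6
N     <- sum(nobs)
total <- sum(children * nobs)      # total number of children observed
pi_hat <- 0.5; lambda <- 1         # starting values
for (iter in 1:200) {
  # E-step: P(structural zero | observed count is 0)
  z <- pi_hat / (pi_hat + (1 - pi_hat) * exp(-lambda))
  # M-step: update the mixing weight and the Poisson mean
  pi_hat <- nobs[1] * z / N
  lambda <- total / (N - nobs[1] * z)
}
c(pi = pi_hat, lambda = lambda)
```

At convergence the fitted zero probability pi + (1 − pi)·exp(−λ) matches the observed zero fraction, a known property of the zero-inflated Poisson MLE.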
Let f(x) be a continuous function f: ℝ¹ → ℝ¹.
A root of the equation f (x) = c is a number x
that satisfies
g(x) = f (x) c = 0.
We'll focus on methods that do not require
taking a derivative of f(x), and one that does.
Bisection Method: Suppose f (x) is
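The bisection step can be sketched directly: keep halving a bracketing interval on which f changes sign.

```r
# Bisection method for f(x) = 0 on a bracketing interval [a, b].
bisect <- function(f, a, b, tol = 1e-8) {
  stopifnot(f(a) * f(b) < 0)       # a root must be bracketed
  while (b - a > tol) {
    m <- (a + b) / 2
    if (f(a) * f(m) <= 0) b <- m else a <- m   # keep the sign-change half
  }
  (a + b) / 2
}

bisect(function(x) x^2 - 2, 0, 2)   # converges to sqrt(2) = 1.414214...
```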
3/3/2015
QUESTION 109
The contents of the raw data file EMPLOYEE are listed below:
-|-10-|-20-|-30
Ruth 39 11
Jose 32 22
Sue 30 33
John 40 44
The following SAS program is submitted:
data test;
  infile 'employee';
  input employee_name $ 1-4;
if emplo