STAT 428 Spring 2010
Homework #4 Feb 26
Homework 4
Due in class on Friday, Mar 5. 1. The following table is the first 6 rows and first 6 columns of the 12 x 12 table recording the month of birth and death for 82 descendants of Queen Victoria (we went over this
Homework 5 solution 1 Problem 1
For this problem, what we need to remember is how to update the weight for each chain we generate. The formula is w_{i,0} = 1, w_{i,T} = g(Y_T | T) w_{i,T-1}, where i denotes the i-th chain or sample and T = 1, 2, ..., 10 denotes the T-
STAT 428 Spring 2010
Homework 3 Feb 17
Homework 3
Due in class on Friday, Feb 26. 1. Implement your rejection sampling algorithm in R for Problem 4(b) of Homework 2 to generate 100 samples from the truncated Weibull distribution with density proportional to f(x) 1{0 < x < 1}. Base
Homework 3 solution 1 Problem 1
We can just use the algorithm described in HW2:
Step 1: Generate x from the Uniform(0,1) distribution.
Step 2: Generate u from Uniform(0,1).
Step 3: Accept x if u < f(x)/M, the usual rejection ratio for a Uniform(0,1) proposal with an envelope constant M >= sup f.
R code:
x <- NULL
num <- 0
while (num < 100) {
  y <- runif(1)        # proposal draw
  u <- runif(1)
  if (u < f(y) / M) {  # f and M as defined in HW2, Problem 4(b)
    x <- c(x, y)
    num <- num + 1
  }
}
STAT 428: Homework 1
Qianyu Cheng
January 25, 2016
3.2 (A similar example can be found in the RVgeneration note, p. 6.) F_X^{-1}(U) has the same
distribution as X. To generate a random observation X, first generate a Uniform(0,1) variate
u and deliver the inverse F_X^{-1}(u).
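As a quick illustration of the inverse-transform recipe above, here is a sketch in Python rather than the course's R, using the exponential distribution as a stand-in target (its inverse CDF has a closed form; this is not the specific distribution from the homework):

```python
import random
import math

def inverse_transform_exponential(rate, n, seed=0):
    """Draw n samples from Exp(rate) via X = F^{-1}(U) = -log(1 - U) / rate."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / rate for _ in range(n)]

samples = inverse_transform_exponential(rate=2.0, n=100_000)
print(sum(samples) / len(samples))  # sample mean should be near 1/rate = 0.5
```

The same pattern works for any target whose CDF can be inverted, which is exactly the point of the note.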
Homework 4 solution 1
a) For this problem, the R code basically follows the lecture notes. In my code, I extended the table with the row sum as the last column and column sum as the last row, so now the table is as follows.
Problem 1
Table 1: New table Co
Homework 6 solution 1 Problem 1
Since p-value = sum_T 1{p(T) <= p(T_0)} p(T) = E[1{p(T) <= p(T_0)}], we can sample from p(T) by the Metropolis-Hastings algorithm, and use the sample mean of those indicator functions to approximate the theoretical mean. In order to estimate
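The idea can be sketched on a toy discrete target (Python sketch; the weights and observed state below are made up for illustration, not the contingency-table distribution from the homework): run Metropolis-Hastings on the target, then average the indicator 1{p(K) <= p(k0)}.

```python
import random

def mh_indicator_mean(weights, k0, n_iter=200_000, seed=1):
    """Metropolis-Hastings on a discrete target p(k) proportional to weights[k],
    estimating E[1{p(K) <= p(k0)}] by the sample mean of the indicator."""
    rng = random.Random(seed)
    m = len(weights)
    k = k0                                   # start the chain at the observed state
    hits = 0
    for _ in range(n_iter):
        prop = k + rng.choice((-1, 1))       # symmetric random-walk proposal
        if 0 <= prop < m and rng.random() < min(1.0, weights[prop] / weights[k]):
            k = prop                         # accept; otherwise stay at k
        hits += weights[k] <= weights[k0]    # indicator 1{p(k) <= p(k0)}
    return hits / n_iter

weights = [1, 2, 4, 8, 4, 2, 1]   # hypothetical unnormalized target
print(mh_indicator_mean(weights, k0=1))
```

Only ratios of the target appear in the acceptance probability, which is why this works even when p(T) is known only up to a normalizing constant.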
STAT 428 Spring 2010
Homework #1 Jan 27
Homework 1
Due in class on Wednesday, Feb 3. 1. Use the naive Monte Carlo method to estimate E(X^6), where X has a normal distribution with mean 3 and variance 4. Describe your algorithm and implement it in R. Giv
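The problem asks for R; as a sketch of the same naive Monte Carlo algorithm (in Python for illustration), average X_i^6 over i.i.d. draws from N(3, 4), i.e. standard deviation 2:

```python
import random

def naive_mc_sixth_moment(n=1_000_000, mu=3.0, sigma=2.0, seed=0):
    """Estimate E[X^6] for X ~ N(mu, sigma^2) by averaging X_i^6 over n draws."""
    rng = random.Random(seed)
    return sum(rng.gauss(mu, sigma) ** 6 for _ in range(n)) / n

print(naive_mc_sixth_moment())
```

For checking: the analytic value is mu^6 + 15 mu^4 sigma^2 + 45 mu^2 sigma^4 + 15 sigma^6 = 13029 here, so the estimate should land nearby.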
STAT 428 Spring 2010
Homework 2 Feb 5
Homework 2
Due in class on Friday, Feb 12. 1. The following is another version of the Box-Muller algorithm. 1. Generate Y1 and Y2 independently from the exponential distribution with parameter 1 until Y2 > (1 - Y1)^2/2. 2.
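Step 2 of the statement is cut off above; the standard completion of this exponential-rejection method (assumed here) attaches a random sign to the accepted Y1 to produce a N(0,1) variate. A Python sketch:

```python
import random

def exp_rejection_normal(n, seed=0):
    """Generate n N(0,1) variates: propose Y1 ~ Exp(1), accept when
    Y2 > (1 - Y1)^2 / 2 with Y2 ~ Exp(1) (giving |Z|), then attach a random sign."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        y1 = rng.expovariate(1.0)
        y2 = rng.expovariate(1.0)
        if y2 > (1.0 - y1) ** 2 / 2.0:
            out.append(y1 if rng.random() < 0.5 else -y1)
    return out

z = exp_rejection_normal(100_000)
m = sum(z) / len(z)
v = sum(x * x for x in z) / len(z)
print(round(m, 3), round(v, 3))  # near 0 and 1
```

The acceptance condition is exactly the rejection-sampling bound for the half-normal density against the Exp(1) proposal.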
Homework 2 solution 1 Problem 1
We need to show that P(X <= x) = Phi(x), where the function
Phi(x) = integral from -infinity to x of (1/sqrt(2 pi)) exp(-t^2/2) dt
is the CDF of N(0,1).
P(X <= x) = P(X <= x, U <= 0.5) + P(X <= x, U > 0.5)
= P(X <= x | U <= 0.5) P(U <= 0.5) + P(X <= x | U > 0.5) P(U > 0.5)
= (1/2) P(X <= x | U <= 0.5) +
STAT 428 Spring 2010
Homework #6 Apr 7
Homework 6
Due in class on Friday, Apr 16. 1. (Diaconis and Sturmfels, 1998) The following table shows data gathered to test the hypothesis of association between birthday and deathday. The table records the month of
Permutation Tests
Consider two independent random samples
X1, X2, ..., Xn and Y1, Y2, ..., Ym from distributions FX and FY, respectively.
Let Z be the set {X1, ..., Xn, Y1, ..., Ym} indexed by
{1, ..., n, n + 1, ..., n + m}.
Note that Zi = Xi if 1 <= i <= n and Zi = Y(i-n) if n + 1 <= i <= n + m.
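The permutation idea can be sketched as a two-sample test of the mean difference (Python sketch; the data below are made up for illustration): pool the samples, reshuffle the pooled labels many times, and compare each reshuffled statistic to the observed one.

```python
import random

def perm_test_mean_diff(x, y, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in means: return the
    proportion of label shuffles whose |mean difference| is at least the
    observed |mean difference|."""
    rng = random.Random(seed)
    z, n = x + y, len(x)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    count = 0
    for _ in range(n_perm):
        rng.shuffle(z)                       # one random relabeling of the pool
        diff = abs(sum(z[:n]) / n - sum(z[n:]) / len(y))
        count += diff >= observed
    return count / n_perm

x = [8.2, 9.1, 7.4, 8.8, 9.5, 8.0]   # hypothetical sample from FX
y = [6.9, 7.2, 6.5, 7.8, 7.1, 6.6]   # hypothetical sample from FY
print(perm_test_mean_diff(x, y))     # small p-value: the groups differ
```

Under the null hypothesis FX = FY, every relabeling of Z is equally likely, which is what justifies this sampling distribution.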
Random Number Generation
Simulation plays an increasingly large role in
statistics. It is central to modern Bayesian
data analysis, has long been utilized to study
properties of statistical procedures that can't
be easily derived analytically, and has nume
Pension Example
Jeff
4/4/2017
Implementation
Let's read in the corresponding dataset from Thisted (1988), and implement the EM algorithm. Here
nobs = (3062, 587, 284, 103, 33, 4, 2)
nobs <- c(3062, 587, 284, 103, 33, 4, 2)
children <- c(0, 1, 2, 3, 4, 5, 6)
crossprod <- sum(nobs * children)  # total number of children
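Assuming the model being fit to these counts is the usual zero-inflated Poisson (a point mass at zero mixed with a Poisson), the EM iteration can be sketched as follows (Python rather than the R used in the notes):

```python
import math

def em_zip(counts, n_iter=200):
    """EM for a zero-inflated Poisson: P(0) = pi + (1 - pi) e^{-lam},
    P(k) = (1 - pi) e^{-lam} lam^k / k! for k >= 1.
    counts[k] = number of observations equal to k."""
    N = sum(counts)
    total = sum(k * n for k, n in enumerate(counts))  # total count over all units
    pi, lam = 0.5, 1.0                                # crude starting values
    for _ in range(n_iter):
        # E-step: expected share of the observed zeros due to the point mass
        z = pi / (pi + (1 - pi) * math.exp(-lam))
        # M-step: update the mixing weight and the Poisson mean
        pi = z * counts[0] / N
        lam = total / (N - z * counts[0])
    return pi, lam

nobs = [3062, 587, 284, 103, 33, 4, 2]
pi, lam = em_zip(nobs)
print(round(pi, 3), round(lam, 3))
```

A useful invariant for debugging: after each M-step the fitted mean (1 - pi) * lam equals the observed mean total/N exactly.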
Monte Carlo Integration
Consider the problem of evaluating
theta = integral from 0 to 1 of g(x) dx
for some function g that is integrable on the
interval (0, 1). When an analytical solution is
not available, simulation techniques can often
be used for this purpose.
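The simplest such technique, the sample-mean Monte Carlo method, averages g at uniform draws (Python sketch):

```python
import random

def mc_integral(g, n=100_000, seed=0):
    """Sample-mean Monte Carlo estimate of the integral of g over (0, 1):
    average g(U_i) over i.i.d. U_i ~ Uniform(0, 1)."""
    rng = random.Random(seed)
    return sum(g(rng.random()) for _ in range(n)) / n

print(mc_integral(lambda x: x * x))  # true value of the integral of x^2 is 1/3
```

By the law of large numbers the estimate converges to the integral, with standard error of order 1/sqrt(n).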
Suppose that a
Run test for randomness
Jeff Douglas
2/16/2017
Run Tests
Let's look at the run test for randomness as an example of letting the computer generate an exact
sampling distribution for a test statistic.
Suppose we wish to investigate the phenomenon of streak
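A Monte Carlo version of the run test can be sketched as follows (Python; the shot record below is hypothetical): permute the observed outcomes, and see how often a permuted sequence has as few runs as the observed one (few runs = streaky).

```python
import random

def n_runs(seq):
    """Number of runs (maximal blocks of equal values) in a sequence."""
    return 1 + sum(a != b for a, b in zip(seq, seq[1:]))

def run_test_pvalue(seq, n_sim=20_000, seed=0):
    """Monte Carlo run test: the proportion of random permutations of seq
    with as few or fewer runs than observed."""
    rng = random.Random(seed)
    observed = n_runs(seq)
    s = list(seq)
    count = 0
    for _ in range(n_sim):
        rng.shuffle(s)
        count += n_runs(s) <= observed
    return count / n_sim

shots = [1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1]  # hypothetical streaky record
print(run_test_pvalue(shots))
```

Because the permutation distribution conditions on the numbers of successes and failures, this reproduces the exact null distribution of the run count up to Monte Carlo error.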
ParametricBootstrap
Jeff
4/6/2017
Parametric versus Nonparametric Bootstrap
Recall from our discussion of bootstrapping that we wished to learn about some aspect of the sampling
distribution of a statistic T(x1, x2, ..., xn). The idea was that x1, ..., xn
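The parametric variant can be sketched as: fit a parametric model to the data, then resample from the fitted model rather than from the data themselves (Python sketch under an assumed normal model; the dataset is simulated for illustration):

```python
import random
import statistics

def parametric_bootstrap_se(data, n_boot=2000, seed=0):
    """Parametric bootstrap SE of the sample mean under a fitted normal model:
    estimate (mu_hat, sigma_hat), redraw samples of the same size from
    N(mu_hat, sigma_hat^2), and take the SD of the replicated means."""
    rng = random.Random(seed)
    mu_hat = statistics.fmean(data)
    sigma_hat = statistics.stdev(data)
    n = len(data)
    reps = [
        statistics.fmean(rng.gauss(mu_hat, sigma_hat) for _ in range(n))
        for _ in range(n_boot)
    ]
    return statistics.stdev(reps)

rng = random.Random(42)
data = [rng.gauss(10, 3) for _ in range(50)]           # simulated sample
print(round(parametric_bootstrap_se(data), 3))         # near 3 / sqrt(50), about 0.42
```

The nonparametric bootstrap would instead resample the observed data with replacement; the parametric version trades robustness for efficiency when the model is right.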
Various Beta Distributions
EM Algorithm
Jeff
3/29/2017
Intro to EM Algorithm
Fitting parametric statistical models when some data are
missing is very common. Some important examples are mixture
models and latent variable models.
It can often be the case that maximizing the likeliho
STAT 428 Statistical Computing
Homework 1 Solutions
Department of Statistics, University of Illinois at Urbana-Champaign
January 22, 2017
Ex.1
(a)
Since we have the pdf of X, f(x) = theta * x^(theta - 1) for theta > 0 and 0 < x < 1, we can derive the cdf of X:
F(x) = integral from 0 to x of f(t) dt = integral from 0 to x of theta * t^(theta - 1) dt = x^theta (
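Inverting F(x) = x^theta gives the inverse-transform sampler X = U^(1/theta); a quick check (Python sketch, though the course code is in R):

```python
import random

def draw_power(theta, n, seed=0):
    """Inverse-transform draws from f(x) = theta * x^(theta - 1) on (0, 1):
    F(x) = x^theta, so X = U^(1/theta) for U ~ Uniform(0, 1)."""
    rng = random.Random(seed)
    return [rng.random() ** (1.0 / theta) for _ in range(n)]

theta = 2.0
xs = draw_power(theta, 200_000)
print(round(sum(xs) / len(xs), 3))  # E[X] = theta / (theta + 1) = 2/3 here
```

Comparing the sample mean against theta/(theta + 1) is a quick sanity check on the sampler.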
Let f(x) be a continuous function f: R -> R.
A root of the equation f(x) = c is a number x
that satisfies
g(x) = f(x) - c = 0.
We'll focus on methods that do not require
taking a derivative of f(x), and one that does.
Bisection Method: Suppose f(x) is
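The bisection step can be sketched in a few lines (Python): keep a bracket on which f changes sign and halve it until it is small.

```python
def bisect_root(f, lo, hi, tol=1e-10):
    """Bisection: f must change sign on [lo, hi]; halve the bracket that
    preserves the sign change until it is narrower than tol."""
    flo = f(lo)
    if flo * f(hi) > 0:
        raise ValueError("f(lo) and f(hi) must have opposite signs")
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if flo * f(mid) <= 0:
            hi = mid                 # root lies in [lo, mid]
        else:
            lo, flo = mid, f(mid)    # root lies in [mid, hi]
    return (lo + hi) / 2.0

# Root of f(x) = x^2 - 2 on [0, 2], i.e. sqrt(2)
print(bisect_root(lambda x: x * x - 2.0, 0.0, 2.0))
```

Each iteration halves the bracket, so the error decreases geometrically regardless of how badly behaved f is between the endpoints.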
Monte Carlo Inference
Let X1, X2, ..., Xn be a random sample from
the distribution of X, and let theta be a parameter
describing some characteristic of this distribution. An estimator
theta-hat = theta-hat(X1, X2, ..., Xn)
is a function of the sample.
Monte Carlo techniques can be
STAT 428
Spring 2017
1. Let the p.d.f. f(x) = theta * x^(theta - 1) for theta > 0 and 0 < x < 1.
a. Write an R function to draw random samples from f that takes as arguments
the sample size n and the parameter theta.
b. Choose a value of theta and draw a very large sample
from f and p
Stat 428 exam review
Monte Carlo inference
Can be thought of as bringing Monte Carlo integration to statistics.
MSE: M different replications of theta-hat.
Trimmed mean
Trim the lower k and upper k values after ordering the data.
Calculating the variance is not straightforward because the X(i) de
5.1 An Introduction to Classical Decision Theory
5.2 Monte Carlo Integration
5.3 Standard Error of theta-hat
5.4 Confidence Interval for theta-hat
5.5 Comparing estimators
Motivating example for Statistical Computing
Importance Sampling
Stratified Importance Sampling
Antithetic
3.1
3.2
3.3
Basic Methods: generating pseudo-random uniform numbers
Desires for a Uniform Pseudo-Random Generator
History
Early Method: von Neumann's Midsquare Method
Linear Congruential and Shift Register Generators
Congruential algorithm:
Shift Register
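A linear congruential generator fits in a few lines (Python sketch; the default constants are the Park-Miller "minimal standard" values, which may differ from the ones used in the notes):

```python
def lcg(seed, a=16807, m=2**31 - 1, c=0):
    """Linear congruential generator: x_{n+1} = (a * x_n + c) mod m.
    Yields uniforms on (0, 1) via x_n / m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

gen = lcg(seed=1)
print([round(next(gen), 6) for _ in range(3)])
```

The quality of the stream depends entirely on the choice of a, c, and m; poor choices produce short periods and lattice structure.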
SAS Base Programming Exam
Accessing Data
Use FORMATTED, LIST and COLUMN input to read raw data files
Use INFILE statement options to control processing when reading raw data files
Use various components of an INPUT statement to process raw data files in
SAS Institute A00-240
SAS Statistical Business Analysis SAS9: Regression
and Model
Version: 4.0
SAS Institute A00-240 Exam
QUESTION NO: 1
Refer to the ROC curve:
As you move along the curve, what changes?
A. The priors in the population
B. The true negati
3/3/2015
QUESTION 109
The contents of the raw data file EMPLOYEE are listed below:
-|-10-|-20-|-30
Ruth 39 11
Jose 32 22
Sue 30 33
John 40 44
The following SAS program is submitted:
data test;
infile 'employee';
input employee_name $ 1-4;
if emplo
title: 'STAT 428: Homework 2: <br> Random Number Generation'
author: "WRITE YOUR LAST NAME, FIRST NAME, netid (not UIN) HERE <br> Collaborated
with: WRITE LAST NAME, FIRST NAME, netid (not UIN) of any collaborators"
date: "Due Week 4, Friday Sep 22 by 5:59
Quotation / Paraphrase / Summary: how they differ.
Quotation: matches the source word for word; you use the source's words; same length as in the source, unless you follow the rules for adding or deleting material from a quote.
Paraphrase: matches the source in terms
STAT 425
Practice Problems
Solution 1
1. Consider the simple linear regression model
y_i = beta_0 + beta_1 * x_i + e_i,   i = 1, ..., n,   (1)
with E[e_i] = 0, Var(e_i) = sigma^2, and Cov(e_i, e_j) = 0 for i != j. Let beta_0-hat, beta_1-hat denote the LS
estimators of beta_0 and beta_1, respectively.
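For reference, the least-squares estimators referred to above have the standard closed forms:

```latex
\hat{\beta}_1 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2},
\qquad
\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}.
```

These follow from setting the partial derivatives of the residual sum of squares to zero.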