Week 1 Lecture 1 An overview
Introduction: Parametric Estimation vs. Nonparametric Estimation
I: Parametric density estimation: Let $Y_1, Y_2, \ldots, Y_n$ be i.i.d. with density $f_\theta(x)$, $\theta \in \mathbb{R}$ (or $\mathbb{R}^2$, or $\mathbb{R}^{10}$). For instance,
$$f_{\mu,\sigma}(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\}, \quad \mu \in \mathbb{R},\ \sigma > 0.$$
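In the parametric case above, estimating the density reduces to estimating the two parameters. As a minimal sketch (the sample size and true parameter values are arbitrary), the Gaussian maximum likelihood estimates are the sample mean and the root of the biased sample variance:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=10_000)  # i.i.d. sample from f_{mu,sigma}

# Gaussian MLE: mu_hat = sample mean, sigma_hat = sqrt of the (biased) sample variance.
mu_hat = y.mean()
sigma_hat = np.sqrt(((y - mu_hat) ** 2).mean())

print(mu_hat, sigma_hat)  # close to (2.0, 1.5)
```

At the parametric rate, both estimates are within $O(n^{-1/2})$ of the truth, which is the benchmark the nonparametric rates later in the notes are compared against.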
Week 6 Lecture 11 Kernel regression and minimax rates
Model: Observe $Y_i = f(X_i) + \xi_i$, where the $\xi_i$, $i = 1, 2, \ldots, n$, are i.i.d. with $E\xi_i = 0$. We often assume the $X_i$ are i.i.d. or $X_i = i/n$.
Nadaraya–Watson estimator: Let $w_i(x) = K\left(\frac{X_i - x}{h}\right)$. The Nadaraya–Watson estimator is
$$\hat f_n(x) = \frac{\sum_{i=1}^n w_i(x)\, Y_i}{\sum_{i=1}^n w_i(x)}.$$
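The Nadaraya–Watson estimator is a local weighted average, which the short sketch below makes concrete; the Gaussian kernel, the regression function $\sin(2\pi x)$, and the bandwidth are arbitrary illustrative choices, not part of the lecture:

```python
import numpy as np

def nadaraya_watson(x, X, Y, h):
    """Nadaraya-Watson estimate at points x with weights w_i(x) = K((X_i - x)/h)."""
    K = lambda u: np.exp(-0.5 * u**2)        # Gaussian kernel (normalization cancels)
    W = K((X[None, :] - x[:, None]) / h)     # weight matrix, one row per query point
    return (W * Y[None, :]).sum(axis=1) / W.sum(axis=1)

rng = np.random.default_rng(1)
n = 2000
X = rng.uniform(0, 1, n)                                 # random design
Y = np.sin(2 * np.pi * X) + 0.3 * rng.normal(size=n)     # Y_i = f(X_i) + xi_i
x = np.array([0.25, 0.75])
fhat = nadaraya_watson(x, X, Y, h=0.05)
print(fhat)  # roughly sin(2*pi*0.25) = 1 and sin(2*pi*0.75) = -1
```

Note the estimator only depends on $K$ through the ratio, so the kernel need not be a density for the formula to make sense.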
Week 5
Lecture 9 A lower bound by Tsybakov
Parameter space: $\Theta = \{\theta_0, \theta_1, \ldots, \theta_M\}$ with
$$d(\theta_i, \theta_j) \ge 2s \quad \text{for all } 0 \le i \ne j \le M. \qquad (1)$$
Usually $s$ is the rate of convergence obtained by a specific procedure, and $d$ is a distance related to the loss function. Reduction
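Once a candidate family $\{\theta_0, \ldots, \theta_M\}$ is proposed, the separation condition (1) is a purely mechanical check. The toy construction below (binary vectors kept by a greedy Hamming-distance packing, with $d$ taken to be normalized Hamming distance) is only an illustration of verifying (1); the specific $m$, $d$, and $s$ are hypothetical:

```python
import itertools
import numpy as np

m = 8
# Greedily keep binary vectors in {0,1}^m so that any two retained vectors
# differ in at least m/4 coordinates (a crude packing of the hypercube).
kept = []
for bits in itertools.product([0, 1], repeat=m):
    v = np.array(bits)
    if all(np.sum(v != w) >= m // 4 for w in kept):
        kept.append(v)

# Take d = normalized Hamming distance and 2s = (m/4)/m, then check (1).
d = lambda u, v: np.mean(u != v)
s = (m // 4) / (2 * m)
ok = all(d(u, v) >= 2 * s for u, v in itertools.combinations(kept, 2))
print(len(kept), ok)
```

In the actual lower-bound argument the packing must also keep the candidate distributions statistically close (small KL divergence), which is what limits how large $s$ can be.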
Week 4 Lecture 7 Optimal rate of convergence in the sup-norm
Model: Let $Y_1, Y_2, \ldots, Y_n$ be i.i.d. on $[0,1]$ with density $f \in F(\beta, M)$, a Hölder ball of order $\beta$.
Minimax rate: It can be shown that
$$\inf_{\hat f} \sup_{f \in F(\beta, M)} E\,\|\hat f - f\|_\infty^2 \ge C \left(\frac{\log n}{n}\right)^{2\beta/(2\beta+1)}.$$
For simplicity,
Week 3 Lecture 4
Model: Let $Y_1, Y_2, \ldots, Y_n$ be i.i.d. on $[-1,1]$ with density $f \in F$, where
$$F = \left\{ f : \int \left(f^{(m)}(x)\right)^2 dx \le M \right\}.$$
We have shown there is a kernel estimator $\hat f_n$ such that
$$\sup_{f \in F} \int E\left(\hat f_n(x) - f(x)\right)^2 dx \le C\, n^{-2m/(2m+1)}.$$
Because it is hard to analyze the
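The kernel density estimator behind this rate can be sketched in a few lines. The Gaussian kernel, the $N(0, 0.25^2)$ target density, and the bandwidth constant are my illustrative choices; only the bandwidth order $n^{-1/(2m+1)}$ (here with $m = 2$) comes from the theory:

```python
import numpy as np

def kde(x, Y, h):
    """Kernel density estimate f_hat_n(x) = (1/(n h)) sum_i K((x - Y_i)/h)."""
    K = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)  # Gaussian kernel
    return K((x[:, None] - Y[None, :]) / h).mean(axis=1) / h

rng = np.random.default_rng(2)
n = 5000
Y = rng.normal(0, 0.25, n)        # true density: N(0, 0.25^2), concentrated in [-1, 1]
h = 0.3 * n ** (-1 / 5)           # bandwidth of order n^{-1/(2m+1)} with m = 2; constant ad hoc
x = np.array([0.0])
fhat = kde(x, Y, h)
true = 1 / (np.sqrt(2 * np.pi) * 0.25)
print(fhat[0], true)              # estimate vs. true density at 0
```

With this bandwidth order, squared bias and variance are balanced at $n^{-2m/(2m+1)}$, matching the displayed rate.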
Week 2 Lecture 3 General m
Model: Let $Y_1, Y_2, \ldots, Y_n$ be i.i.d. on $[0,1]$ with density $f \in F$, where
$$F = \left\{ f : \int \left(f^{(m)}(x)\right)^2 dx \le M \right\}.$$
Goal: Find $\hat f$ such that
$$\sup_{f \in F} \int E\left(\hat f(x) - f(x)\right)^2 dx \le C_M\, n^{-2m/(2m+1)}.$$
(Note that $K$ may not be nonnegative.) The bias part is $\int \left(E\hat f_n(x) - f(x)\right)^2 dx$
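The remark that $K$ may not be nonnegative refers to higher-order kernels: to kill bias terms up to order $m$ one needs $\int u^j K(u)\,du = 0$ for $1 \le j \le m-1$, and for $m > 2$ such kernels must take negative values. As a sketch, $K(u) = \tfrac{3}{8}(3 - 5u^2)$ on $[-1,1]$ is a standard order-4 kernel; the numerical moment check below is only illustrative:

```python
import numpy as np

# Order-4 kernel on [-1, 1]: integrates to 1, first and second moments vanish,
# fourth moment is nonzero -- and K is negative for |u| > sqrt(3/5).
K = lambda u: (3 / 8) * (3 - 5 * u**2)

u = np.linspace(-1, 1, 200_001)
du = u[1] - u[0]
moment = lambda j: np.sum(u**j * K(u)) * du   # Riemann-sum approximation

print(moment(0))  # ~ 1.0
print(moment(2))  # ~ 0.0
print(moment(4))  # ~ -3/35, nonzero
print(K(1.0))     # negative near the endpoints
```

Because $\int u^2 K = 0$, the $h^2$ bias term drops out and the bias is driven by the fourth moment, giving the faster $n^{-2m/(2m+1)}$ rate for $m = 4$-smooth densities.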
Week 13 Lecture 24 Adaptive Wavelet Estimation
Donoho and Johnstone (1995, JASA). Sketch of the proof. Consider the sequence model $y_i = \theta_i + z_i$, $i = 1, \ldots, d$, where the $z_i$ are independent $N(0,1)$ variables. Set $r(\theta) = d^{-1} E\|\hat\theta - \theta\|^2$. Stein's unbiased
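The unbiased-risk idea can be illustrated for soft thresholding $\hat\theta_i = \mathrm{sgn}(y_i)(|y_i| - \lambda)_+$ at noise level 1, where $\mathrm{SURE}(\lambda) = d - 2\,\#\{i : |y_i| \le \lambda\} + \sum_i \min(y_i^2, \lambda^2)$ satisfies $E\,\mathrm{SURE} = E\|\hat\theta - \theta\|^2$. A hedged simulation check (the particular sparse $\theta$ and $\lambda$ are arbitrary):

```python
import numpy as np

def soft(y, lam):
    """Soft-threshold y at level lam."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def sure(y, lam):
    # Stein's unbiased risk estimate for soft thresholding, noise variance 1:
    # SURE = d - 2*#{|y_i| <= lam} + sum_i min(y_i^2, lam^2)
    d = y.size
    return d - 2 * np.sum(np.abs(y) <= lam) + np.sum(np.minimum(y**2, lam**2))

rng = np.random.default_rng(3)
d, lam = 200, 1.0
theta = np.zeros(d)
theta[:10] = 3.0                      # a sparse signal

losses, sures = [], []
for _ in range(2000):
    y = theta + rng.normal(size=d)    # y_i = theta_i + z_i
    losses.append(np.sum((soft(y, lam) - theta) ** 2))
    sures.append(sure(y, lam))

print(np.mean(losses), np.mean(sures))  # the two averages should be close
```

Since SURE is computable from the data alone, minimizing it over $\lambda$ gives the adaptive (SureShrink-style) threshold choice.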
Week 12 Lecture 22
A group of students present Donoho and Johnstone (PTRF, 1994).
Lecture 23 Review from the presentation
Suppose we observe $y_i = \theta_i + z_i$, $i = 1, \ldots, n$, where $\theta = (\theta_i)_{i=1}^n$ is constrained to lie in a ball of radius $C$ defined by the $l_p$ norm,
$$\Theta = \{\theta : \|\theta\|_p \le C\}.$$
Week 11 Lecture 20 An Introduction to Wavelet regression
Definition: A wavelet is a function $\psi$ such that
$$\left\{2^{j/2}\,\psi(2^j x - k) : j, k \in \mathbb{Z}\right\}$$
is an orthonormal basis for $L^2(\mathbb{R})$. This function $\psi$ is called the mother wavelet, and it can often be constructed from a father wavelet $\varphi$.
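A concrete mother wavelet satisfying this definition is the Haar wavelet $\psi = \mathbf{1}_{[0,1/2)} - \mathbf{1}_{[1/2,1)}$. The grid-based inner products below are only a numerical illustration of the orthonormality of a few $\psi_{j,k}$:

```python
import numpy as np

def haar(x):
    # Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere.
    return (np.where((x >= 0) & (x < 0.5), 1.0, 0.0)
            - np.where((x >= 0.5) & (x < 1.0), 1.0, 0.0))

def psi(j, k, x):
    # psi_{j,k}(x) = 2^{j/2} psi(2^j x - k)
    return 2 ** (j / 2) * haar(2**j * x - k)

x = np.linspace(-4, 4, 800_001)
dx = x[1] - x[0]
inner = lambda f, g: np.sum(f * g) * dx   # Riemann approximation of the L^2 inner product

print(inner(psi(0, 0, x), psi(0, 0, x)))  # ~ 1 (unit norm)
print(inner(psi(0, 0, x), psi(1, 0, x)))  # ~ 0 (orthogonal across scales)
print(inner(psi(0, 0, x), psi(0, 1, x)))  # ~ 0 (orthogonal across shifts)
```

Smoother mother wavelets (e.g. Daubechies families) are built from a father wavelet via the two-scale relation rather than written in closed form.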
Week 10 Lecture 18 Linear or nonlinear estimation
The sparsity of the coefficients may be quantified using the $l_p$ norms $\|\theta\|_p$, which track sparsity for $p < 2$, with smaller $p$ giving more stringent measures. For instance, when $\epsilon = 1/\sqrt{n}$,
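To see how smaller $p$ penalizes spread-out coefficients, compare a maximally sparse and a maximally dense vector with the same $l_2$ norm; the specific vectors are arbitrary illustrations:

```python
import numpy as np

lp = lambda v, p: np.sum(np.abs(v) ** p) ** (1 / p)

n = 100
sparse = np.zeros(n); sparse[0] = 1.0     # all energy in one coefficient
dense = np.full(n, 1 / np.sqrt(n))        # energy spread over all coefficients

# Same l2 norm ...
print(lp(sparse, 2), lp(dense, 2))        # both 1
# ... but for p < 2 the dense vector has a much larger lp norm,
# so an lp-ball constraint with small p forces near-sparsity.
print(lp(sparse, 1), lp(dense, 1))        # 1 vs sqrt(n) = 10
print(lp(sparse, 0.5), lp(dense, 0.5))    # 1 vs much larger still
```

In general $\|{\rm dense}\|_p / \|{\rm sparse}\|_p = n^{1/p - 1/2}$ here, which blows up as $p \downarrow 0$: the dense vector sits far outside any small-$p$ ball that contains the sparse one.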
Week 9 Lecture 16 Quadratic functional estimation
Model: Observe the sequence model
$$y_i = \theta_i + n^{-1/2} z_i,$$
where the $z_i$ are i.i.d. $N(0,1)$. The model comes from the white noise model (or many other models):
$$dY(t) = f(t)\,dt + n^{-1/2}\,dB(t), \quad t \in [0,1].$$
Let $\varphi_i(t)$
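In this sequence model, the quadratic functional $Q(\theta) = \sum_i \theta_i^2$ has the natural unbiased estimate $\sum_i (y_i^2 - n^{-1})$, since $E y_i^2 = \theta_i^2 + n^{-1}$. The simulation below is a hedged sketch of that fact (the truncation level $d$ and the particular $\theta$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 10_000, 500
theta = 1.0 / (1 + np.arange(d)) ** 1.5     # arbitrary square-summable coefficients
Q = np.sum(theta**2)

reps = 2000
est = np.empty(reps)
for r in range(reps):
    y = theta + n ** (-0.5) * rng.normal(size=d)  # y_i = theta_i + n^{-1/2} z_i
    est[r] = np.sum(y**2 - 1 / n)                 # unbiased: E y_i^2 = theta_i^2 + 1/n

print(est.mean(), Q)   # the average estimate should be close to Q
```

The interesting theory concerns how the variance of such estimates, and hence the attainable rate, depends on the smoothness of $f$.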
Week 8 Lecture 15
Model: $y_i = \theta_i + \epsilon z_i$, where the $z_i$ are i.i.d. $N(0,1)$ and $\theta \in \Theta(M)$, an ellipsoid in $l_2(\mathbb{N})$:
$$\Theta(M) = \Big\{\theta : \sum_i a_i^2 \theta_i^2 \le M^2\Big\},$$
with $a_i \to \infty$.
Pinsker Theorem:
$$R_N(\epsilon, \Theta) \sim R_L(\epsilon, \Theta) \quad \text{as } \epsilon \to 0,$$
that is, the minimax risk over all estimators is asymptotically equal to the minimax risk over linear estimators. We will only prove this result for the following
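The linear side of Pinsker's theorem is explicitly computable: the minimax linear weights are $c_i = (1 - \kappa a_i)_+$ with $\kappa$ solving $\epsilon^2 \sum_i a_i (1 - \kappa a_i)_+ = \kappa M^2$, and then $R_L = \epsilon^2 \sum_i (1 - \kappa a_i)_+$. The bisection below is an assumption-laden sketch of computing these Pinsker weights for a Sobolev-type ellipsoid $a_i = i^m$ (the values of $\epsilon$, $M$, $m$ are arbitrary):

```python
import numpy as np

# Pinsker weights for the ellipsoid {theta : sum a_i^2 theta_i^2 <= M^2}.
eps, M, m = 0.05, 1.0, 2
a = (1.0 + np.arange(10_000)) ** m          # a_i = i^m (truncated to finitely many terms)

# kappa solves eps^2 * sum a_i (1 - kappa a_i)_+ = kappa * M^2;
# g is strictly decreasing with g(0) > 0 and g(1) < 0, so bisection applies.
g = lambda k: eps**2 * np.sum(a * np.maximum(1 - k * a, 0)) - k * M**2

lo, hi = 0.0, 1.0
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
kappa = (lo + hi) / 2

c = np.maximum(1 - kappa * a, 0)            # Pinsker weights c_i = (1 - kappa a_i)_+
R_L = eps**2 * np.sum(c)                    # linear minimax risk
print(kappa, R_L)
```

Only finitely many weights are nonzero (those with $a_i < 1/\kappa$), so the truncation to 10,000 coordinates is harmless here; the hard part of the theorem is showing no nonlinear estimator can beat $R_L$ asymptotically.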
Week 7 Lecture 12 Fourier estimation and Linear Minimaxity
An orthonormal basis for $L^2([0,1])$ is
$$\varphi_1(x) = 1, \qquad \varphi_{2k}(x) = \sqrt{2}\cos(2\pi k x), \qquad \varphi_{2k+1}(x) = \sqrt{2}\sin(2\pi k x), \quad k \ge 1.$$
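The orthonormality of this trigonometric basis is easy to confirm numerically; the grid-based integration below is only an illustration:

```python
import numpy as np

def phi(i, x):
    # Trigonometric basis on [0, 1]: phi_1 = 1, phi_{2k} = sqrt(2) cos(2 pi k x),
    # phi_{2k+1} = sqrt(2) sin(2 pi k x).
    if i == 1:
        return np.ones_like(x)
    k = i // 2
    trig = np.cos if i % 2 == 0 else np.sin
    return np.sqrt(2) * trig(2 * np.pi * k * x)

N = 100_000
x = np.linspace(0, 1, N, endpoint=False)  # uniform grid on [0, 1)
dx = 1.0 / N
gram = np.array([[np.sum(phi(i, x) * phi(j, x)) * dx for j in range(1, 6)]
                 for i in range(1, 6)])
print(np.round(gram, 6))  # ~ 5x5 identity matrix
```

On a uniform periodic grid these Riemann sums reproduce the continuous inner products essentially exactly for low frequencies, which is why the Gram matrix comes out as the identity to machine precision.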
The periodic Sobolev class $\widetilde{W}_2^m(M)$ is defined as
$$F = \Big\{ f : \int_0^1 \big(f^{(m)}\big)^2 \le M^2, \quad f^{(j)}(0) = f^{(j)}(1), \ j = 0, 1, \ldots, m-1 \Big\}.$$