Week10Student2009

Week 10 Lecture 18: Linear or nonlinear estimation


The sparsity of the coefficients may be quantified using $\ell_p$ norms $\|\theta\|_p$, which track sparsity for $p < 2$, with smaller $p$ giving more stringent measures. For instance, $\|(1, 0, \ldots, 0)\|_2 = \|(a, a, \ldots, a)\|_2$ when $a = 1/\sqrt{n}$, even though $(1, 0, \ldots, 0)$ is apparently more sparse than $(a, \ldots, a)$; the $\ell_1$ norm does distinguish them, since $\|(1, 0, \ldots, 0)\|_1 = 1 \ll \|(a, \ldots, a)\|_1 = \sqrt{n}$. I am still not clear how good the $\ell_p$ norm is at quantifying sparsity, but it is often convenient to do the analysis there.

Suppose that we observe $n$-dimensional data $y_i = \theta_i + \varepsilon z_i$, $i = 1, 2, \ldots, n$, with $\theta \in \Theta = \Theta_{n,p}(C) = \{\theta : \|\theta\|_p^p \le C^p\}$.

Question: For $p < 2$, is it true that
$$R_L(\Theta) = \sup\left\{\sum_i \frac{\varepsilon^2 \theta_i^2}{\varepsilon^2 + \theta_i^2} : \sum_i |\theta_i|^p \le C^p\right\}?$$
Can we apply the minimax theorem here? No: for $p < 2$ the set $\{(\theta_i^2)_i : \theta \in \Theta\}$ is not convex, so the interchange of $\inf$ and $\sup$ is not justified.

Lemma: Let $\Theta$ be solid, orthosymmetric, and compact. Then $R_L(\Theta, \varepsilon) = R_L(\mathrm{QHull}(\Theta), \varepsilon)$, where
$$\mathrm{QHull}(\Theta) = \left\{\theta : (\theta_i^2)_i \in \mathrm{Hull}\bigl(\{(\theta_i^2)_i : \theta \in \Theta\}\bigr)\right\}.$$

Proof: $R_L(\Theta) = \inf_c \sup_{\Theta} \sum_i \left\{c_i^2 \varepsilon^2 + (1 - c_i)^2 \theta_i^2\right\} = \inf_c \sup_{\mathrm{QHull}(\Theta)} \sum_i \left\{c_i^2 \varepsilon^2 + (1 - c_i)^2 \theta_i^2\right\}$, since for fixed $c$ the objective is linear in $(\theta_i^2)_i$, and a linear function has the same supremum over a set and over its convex hull.

So for $p \le 2$ the quadratic hull of $\Theta_{n,p}(C)$ is the $\ell_2$ ball of radius $C$, and
$$R_L(\Theta) = \inf_c \left\{n c^2 \varepsilon^2 + (1 - c)^2 C^2\right\} = \frac{n \varepsilon^2 C^2}{n \varepsilon^2 + C^2}.$$
We could do similar calculations for the $p \ge 2$ case. For $p \ge 2$, writing $\theta_i^2 = \varepsilon^2 (C \varepsilon^{-1} u_i)^{2/p}$, we want to maximize
$$\varepsilon^2 \sum_i \frac{(C \varepsilon^{-1} u_i)^{2/p}}{1 + (C \varepsilon^{-1} u_i)^{2/p}} \le \varepsilon^2 \, \frac{n \sum_i (C \varepsilon^{-1} u_i)^{2/p}}{n + \sum_i (C \varepsilon^{-1} u_i)^{2/p}} \ldots
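Two of the claims above can be checked numerically: that the $\ell_1$ norm separates a spike from a spread vector of equal $\ell_2$ norm, and that $\inf_c \{n c^2 \varepsilon^2 + (1-c)^2 C^2\}$ equals the closed form $n\varepsilon^2 C^2/(n\varepsilon^2 + C^2)$. A minimal sketch (the values of $n$, $\varepsilon$, $C$ are arbitrary choices for illustration, not from the notes):

```python
import numpy as np

# Spike (1,0,...,0) vs spread (a,...,a) with a = 1/sqrt(n):
# same l_2 norm, but l_1 separates them (1 vs sqrt(n)).
n = 10_000
spike = np.zeros(n)
spike[0] = 1.0
spread = np.full(n, 1.0 / np.sqrt(n))

print(np.linalg.norm(spike, 2), np.linalg.norm(spread, 2))  # both 1.0
print(np.linalg.norm(spike, 1), np.linalg.norm(spread, 1))  # 1.0 vs 100.0

# Linear minimax risk over the l_2 ball {theta : sum theta_i^2 <= C^2}
# with a common shrinkage coefficient c:
#   risk(c) = n c^2 eps^2 + (1 - c)^2 C^2,
# and the claimed closed form for inf_c is n eps^2 C^2 / (n eps^2 + C^2).
eps, C = 0.1, 2.0
cs = np.linspace(0.0, 1.0, 100_001)          # grid search over c in [0, 1]
risks = n * cs**2 * eps**2 + (1 - cs)**2 * C**2
closed_form = n * eps**2 * C**2 / (n * eps**2 + C**2)
print(risks.min(), closed_form)              # agree to grid accuracy
```

The grid minimum sits at $c^\ast = C^2/(n\varepsilon^2 + C^2)$, the optimal common shrinkage level, matching the derivative calculation behind the closed form.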

This note was uploaded on 11/06/2009 for the course STAT 680 at Yale.


