Week 10, Lecture 18: Linear or nonlinear estimation

The sparsity of the coefficients may be quantified using $\ell_p$ norms $\|\theta\|_p$, which track sparsity for $p < 2$, with smaller $p$ giving more stringent measures. For instance, $\|(1, 0, \dots, 0)\|_2 = \|(\delta, \delta, \dots, \delta)\|_2$ when $\delta = 1/\sqrt{n}$, even though $(1, 0, \dots, 0)$ is clearly sparser than $(\delta, \dots, \delta)$; the $\ell_1$ norm does distinguish them: $\|(1, 0, \dots, 0)\|_1 = 1 \ll \|(\delta, \dots, \delta)\|_1 = \sqrt{n}$. It is not entirely clear how well the $\ell_p$ norm quantifies sparsity, but it is often convenient for analysis.

Suppose that we observe $n$-dimensional data $y_i = \theta_i + \epsilon z_i$, $i = 1, 2, \dots, n$, with $z_i \overset{iid}{\sim} N(0,1)$, and that $\theta \in \Theta = \Theta_{n,p}(C) = \{\theta : \|\theta\|_p^p \le C^p\}$.

Question: For $p < 2$, is it true that
$$R_L(\Theta) = \sup\left\{ \sum_i \frac{\epsilon^2 \theta_i^2}{\epsilon^2 + \theta_i^2} : \sum_i |\theta_i|^p \le C^p \right\}?$$
Can we apply the minimax theorem here? No: for $p < 2$ the set $\Theta^2 = \{(\theta_i^2)_i : \theta \in \Theta\}$ is not convex.

Lemma: Let $\Theta$ be solid, orthosymmetric, and compact. Then $R_L(\Theta, \epsilon) = R_L(\mathrm{QHull}(\Theta), \epsilon)$, where
$$\mathrm{QHull}(\Theta) = \left\{ \theta : (\theta_i^2)_i \in \mathrm{Hull}(\Theta^2) \right\}.$$

Proof: Restricting to diagonal linear estimators $\hat\theta_i = c_i y_i$,
$$R_L(\Theta) = \inf_c \sup_{\Theta} \left\{ \sum_i c_i^2 \epsilon^2 + (1 - c_i)^2 \theta_i^2 \right\} = \inf_c \sup_{\mathrm{QHull}(\Theta)} \left\{ \sum_i c_i^2 \epsilon^2 + (1 - c_i)^2 \theta_i^2 \right\},$$
since for each fixed $c$ the objective is linear in $(\theta_i^2)$, and the supremum of a linear function over $\Theta^2$ equals its supremum over $\mathrm{Hull}(\Theta^2)$.

So for $p \le 2$, where $\mathrm{QHull}(\Theta_{n,p}(C))$ is the $\ell_2$ ball $\Theta_{n,2}(C)$,
$$R_L(\Theta) = \inf_c \left\{ n c^2 \epsilon^2 + (1 - c)^2 C^2 \right\} = \frac{n \epsilon^2 C^2}{n \epsilon^2 + C^2}.$$
We can do similar calculations for the $p \ge 2$ case. For $p \ge 2$, $\Theta^2$ is already convex, and we want to maximize
$$\epsilon^2 \sum_i \frac{(C \epsilon^{-1} u_i)^{2/p}}{1 + (C \epsilon^{-1} u_i)^{2/p}},$$
where $u_i = \epsilon^{1-p} \theta_i^p / C$, so that $(\theta_i/\epsilon)^2 = (C \epsilon^{-1} u_i)^{2/p}$ and the constraint becomes $\sum_i u_i \le (C/\epsilon)^{p-1}$. Since $t \mapsto t/(1+t)$ is concave, Jensen's inequality bounds this sum by
$$\frac{\epsilon^2 \, n \sum_i (C \epsilon^{-1} u_i)^{2/p}}{n + \sum_i (C \epsilon^{-1} u_i)^{2/p}} \dots$$
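Two of the claims above are easy to check numerically: the spike $(1,0,\dots,0)$ and the flat vector $(\delta,\dots,\delta)$ with $\delta = 1/\sqrt{n}$ have equal $\ell_2$ norm but very different $\ell_1$ norms, and $\inf_c \{nc^2\epsilon^2 + (1-c)^2C^2\}$ equals $n\epsilon^2C^2/(n\epsilon^2+C^2)$. Here is a small Python sketch (the particular values of $n$, $\epsilon$, $C$ are ours, chosen only for illustration):

```python
import numpy as np

n = 100
eps = 0.3   # noise level epsilon (illustrative value, not from the lecture)
C = 2.0     # lp-ball radius (illustrative value, not from the lecture)

# 1) Sparsity and lp norms: the spike (1,0,...,0) and the flat vector
#    (delta,...,delta) with delta = 1/sqrt(n) have the same l2 norm,
#    but l1 norms 1 and sqrt(n) respectively.
spike = np.zeros(n)
spike[0] = 1.0
flat = np.full(n, 1.0 / np.sqrt(n))
assert np.isclose(np.linalg.norm(spike, 2), np.linalg.norm(flat, 2))
print(np.linalg.norm(spike, 1), np.linalg.norm(flat, 1))  # 1 vs sqrt(n) = 10

# 2) For p <= 2: inf_c { n c^2 eps^2 + (1-c)^2 C^2 } should equal
#    n eps^2 C^2 / (n eps^2 + C^2); compare the closed form to a fine grid.
cs = np.linspace(0.0, 1.0, 100001)
risk = n * cs**2 * eps**2 + (1.0 - cs)**2 * C**2
closed_form = n * eps**2 * C**2 / (n * eps**2 + C**2)
print(risk.min(), closed_form)
assert np.isclose(risk.min(), closed_form, rtol=1e-6)
```

The grid minimum is attained near $c^\ast = C^2/(n\epsilon^2 + C^2)$, the optimal shrinkage constant from the first-order condition.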
This note was uploaded on 11/06/2009 for the course STAT 680 at Yale.
Professor: Harrison H. Zhou ('09)
