Combining these bounds,
\[
\log|\pi_l(y)| \;\le\; \sum_{j=0}^{l}\log|\pi_{j\setminus j-1}(y)| \;\le\; C\sum_{j=0}^{l}(m+4-j)2^j \;\le\; C(m+4-l)2^l.
\]

Lemma 3.5. Under the setup of Theorem 1.6, let $m = \lceil \log_2 p \rceil$ and let $G(\alpha,\beta)$ be as in Lemma 3.1. Then there are constants $C, c > 0$ depending only on $\alpha$ and $\beta$ such that for any $l \in \{0,1,\ldots,m+3\}$, $j = l$ or $j = l-1$ (if $l \ge 1$), and $t > 0$,
\[
\mathbb{P}\Big[\sup_{y \in D_2^p} \pi_j(y)^T K(X)\,\pi_{l\setminus l-1}(y) > t \text{ and } (X,X') \in G(\alpha,\beta)\Big]
\;\le\; 2\exp\Big(C(m+4-l)2^l - \frac{c\,t^2\,2^{l/2}n^2}{p^{1/2}(n+p)}\Big).
\]

Proof. For notational convenience, define the event $\mathcal{E} := \{(X,X') \in G(\alpha,\beta)\}$. Applying Lemma 3.4 and a union bound over $\{\pi_l(x) : x \in D_2^p\}$, for any $\lambda > 0$,
\begin{align*}
\mathbb{P}\Big[\sup_{y \in D_2^p} \pi_j(y)^T K(X)\,\pi_{l\setminus l-1}(y) > t \text{ and } \mathcal{E}\Big]
&\le e^{C(m+4-l)2^l}\sup_{y \in \{\pi_l(x)\,:\,x \in D_2^p\}} \mathbb{P}\big[\pi_j(y)^T K(X)\,\pi_{l\setminus l-1}(y) > t \text{ and } \mathcal{E}\big]\\
&\le e^{C(m+4-l)2^l}\,e^{-\lambda t}\sup_{y \in \{\pi_l(x)\,:\,x \in D_2^p\}} \mathbb{E}\big[e^{\lambda \pi_j(y)^T K(X)\,\pi_{l\setminus l-1}(y)}\,\mathbf{1}\{\mathcal{E}\}\big].
\end{align*}
Let $\Lambda$ be the set of all diagonal matrices in $\mathbb{R}^{p \times p}$ with all diagonal entries in $\{-1,1\}$. Note that $(X,X') \in G(\alpha,\beta)$ if and only if $(X,DX') \in G(\alpha,\beta)$ for all $D \in \Lambda$. Then, conditional on $X$ and the event $\mathcal{E}$, $X'$ equals $DX'$ in law for $D$ uniformly distributed over $\Lambda$. Hence
\[
\mathbb{E}[K(X') \mid X, \mathcal{E}]
= \mathbb{E}[K(DX') \mid X, \mathcal{E}]
= \mathbb{E}\big[\mathbb{E}[K(DX') \mid X', X, \mathcal{E}] \,\big|\, X, \mathcal{E}\big] = 0,
\]
where the last equality follows from $\mathbb{E}[K(DX') \mid X'] = 0$ as the kernel function $k$ is odd. Then Jensen's inequality yields, for any $y \in D_2^p$ and $\lambda > 0$,
\[
\mathbb{E}\big[e^{-\lambda \pi_j(y)^T K(X')\,\pi_{l\setminus l-1}(y)} \,\big|\, X, \mathcal{E}\big] \ge 1,
\]
and so
\begin{align*}
\mathbb{E}\big[e^{\lambda \pi_j(y)^T K(X)\,\pi_{l\setminus l-1}(y)}\,\mathbf{1}\{\mathcal{E}\}\big]
&= \mathbb{E}\big[e^{\lambda \pi_j(y)^T K(X)\,\pi_{l\setminus l-1}(y)} \,\big|\, \mathcal{E}\big]\,\mathbb{P}[\mathcal{E}]\\
&\le \mathbb{E}\Big[e^{\lambda \pi_j(y)^T K(X)\,\pi_{l\setminus l-1}(y)}\,\mathbb{E}\big[e^{-\lambda \pi_j(y)^T K(X')\,\pi_{l\setminus l-1}(y)} \,\big|\, X, \mathcal{E}\big] \;\Big|\; \mathcal{E}\Big]\,\mathbb{P}[\mathcal{E}]\\
&= \mathbb{E}\big[e^{\lambda \pi_j(y)^T (K(X)-K(X'))\,\pi_{l\setminus l-1}(y)} \,\big|\, \mathcal{E}\big]\,\mathbb{P}[\mathcal{E}]\\
&= \mathbb{E}\big[e^{\lambda \pi_j(y)^T (K(X)-K(X'))\,\pi_{l\setminus l-1}(y)}\,\mathbf{1}\{\mathcal{E}\}\big]\\
&\le 2\exp\Big(\frac{C\lambda^2\,p^{1/2}(n+p)}{2^{l/2}n^2}\Big),
\end{align*}
where the last line applies Lemma 3.2 and the bound $\|\pi_{l\setminus l-1}(y)\| \le 2^{-l/2}$. Optimizing over $\lambda$ yields the desired result.
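Explicitly, the optimization over $\lambda$ is the standard quadratic minimization; the shorthand $A$ and the particular value of $c$ below are one admissible choice rather than the ones fixed in the original argument. Writing $A := p^{1/2}(n+p)/(2^{l/2}n^2)$, the two displays above combine to give, for every $\lambda > 0$,
\[
\mathbb{P}\Big[\sup_{y \in D_2^p} \pi_j(y)^T K(X)\,\pi_{l\setminus l-1}(y) > t \text{ and } \mathcal{E}\Big]
\le 2\exp\big(C(m+4-l)2^l - \lambda t + C\lambda^2 A\big),
\]
and the choice $\lambda = t/(2CA)$ minimizes the exponent, with $-\lambda t + C\lambda^2 A = -t^2/(4CA)$. The stated bound therefore holds with $c = 1/(4C)$.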
We now conclude the proof of Theorem 1.6.

Proof of Theorem 1.6. For each $l = 0, \ldots, m+3$, set
\[
t_l^2 = C_0(m+4-l)\,\frac{2^{l/2}\,p^{1/2}(n+p)}{n^2}
\]
for a constant $C_0 := C_0(\alpha,\beta)$. Let $X'$ be an independent copy of $X$. Then by Lemma 3.5, for each $l = 0,\ldots,m+3$ and $j = l$ or $j = l-1$,
\[
\mathbb{P}\Big[\sup_{y \in D_2^p} \pi_j(y)^T K(X)\,\pi_{l\setminus l-1}(y) > t_l \text{ and } (X,X') \in G(\alpha,\beta)\Big]
\;\le\; 2e^{-(cC_0 - C)(m+4-l)2^l}.
\]
Recalling $m = \lceil \log_2 p \rceil$, we may pick $C_0$ sufficiently large such that
\[
\sum_{l=0}^{m+3} 4e^{-(cC_0 - C)(m+4-l)2^l} \;\le\; 4(m+4)\,e^{-(cC_0 - C)(m+4)} \;\le\; C'p^{-\alpha}
\]
for a constant $C' := C'(\alpha,\beta)$. Then (7) and a union bound imply
\[
\mathbb{P}\Big[\sup_{y \in D_2^p} y^T K(X)\,y > 2\sum_{l=0}^{m+3} t_l \text{ and } (X,X') \in G(\alpha,\beta)\Big] \;\le\; C'p^{-\alpha}.
\]
Finally, the bound
\[
2\sum_{l=0}^{m+3} t_l
\;<\; 2C_0^{1/2}\,\frac{p^{1/4}(n+p)^{1/2}}{n}\sum_{l=0}^{m+3} (m+4-l)\,2^{l/4}
\;=\; 2C_0^{1/2}\,\frac{p^{1/4}(n+p)^{1/2}}{n}\sum_{l=0}^{m+3}\sum_{j=0}^{l} 2^{j/4}
\;\le\; C\,\frac{p^{1/4}(n+p)^{1/2}\,p^{1/4}}{n}
\;\le\; C\max\Big(\frac{p}{n},\,\sqrt{\frac{p}{n}}\Big),
\]
the decomposition (7), and Lemmas 3.1 and 3.3 yield the desired result.

4. Decomposition of Hermite polynomials of sums of IID random variables

In this section, we prove the approximation (8), formalized as the following proposition:

Proposition 4.1. Let $Z = (z_j : 1 \le j \le n) \in \mathbb{R}^n$, where the $z_j$ are IID random variables such that $\mathbb{E}[z_j] = \mathbb{E}[z_j^3] = 0$, $\mathbb{E}[z_j^2] = 1$, and $\mathbb{E}[|z_j|^l] < \infty$ for each $l \ge 1$. Let $h_d$ denote the orthonormal Hermite polynomial of degree $d$. Define
\[
q_{d,n}(Z) = \sqrt{\frac{1}{n^d\,d!}}\;\cdots
\]
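Here $h_d$ is normalized to be orthonormal with respect to the standard Gaussian measure; assuming the probabilists' convention, with $\mathrm{He}_d$ the monic Hermite polynomial of degree $d$, this means
\[
h_d = \frac{\mathrm{He}_d}{\sqrt{d!}}, \qquad
h_0(x) = 1,\quad h_1(x) = x,\quad h_2(x) = \frac{x^2-1}{\sqrt{2}},\quad h_3(x) = \frac{x^3-3x}{\sqrt{6}},
\]
so that $\mathbb{E}[h_d(\xi)\,h_{d'}(\xi)] = \delta_{dd'}$ for $\xi \sim \mathcal{N}(0,1)$.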
