

Hard EM for Naïve-Bayes

Algorithm: For t = 1 . . . T:

1. [E-step] Calculate the most probable class for each training example: for i = 1 . . . n, for y = 1 . . . k, calculate

$$\delta(y|i) = \begin{cases} 1 & \text{if } y = \arg\max_{y'} p(y' \mid x^{(i)}; \theta^{t-1}) \\ 0 & \text{otherwise} \end{cases}$$

2. [M-step] Calculate the new parameter values (the maximum-likelihood estimates, given the counts):

$$q^t(y) = \frac{1}{n} \sum_{i=1}^{n} \delta(y|i) \qquad\qquad q_j^t(x|y) = \frac{\sum_{i:\, x_j^{(i)} = x} \delta(y|i)}{\sum_{i} \delta(y|i)}$$

where $\theta^t$ is a concatenation of the Naïve Bayes parameters $q^t(y)$ and $q_j^t(x|y)$ at iteration $t$.

Output: Parameter values $q^T(y)$ and $q_j^T(x|y)$.

Initialization: the initial parameters must satisfy, for all $y \in \{1 \ldots k\}$ and all $j \in \{1 \ldots d\}$,

$$q_j^0(x|y) \ge 0 \ \text{ for all } x, \qquad\qquad \sum_{x \in \{-1,+1\}} q_j^0(x|y) = 1$$

(Soft) EM for Naïve-Bayes

Algorithm: For t = 1 . . . T:

1. [E-step] Calculate the posteriors (soft completions) for each training example: for i = 1 . . . n, for y = 1 . . . k, calculate

$$\delta(y|i) = p(y \mid x^{(i)}; \theta^{t-1}) = \frac{q^{t-1}(y) \prod_{j=1}^{d} q_j^{t-1}(x_j^{(i)}|y)}{\sum_{y'=1}^{k} q^{t-1}(y') \prod_{j=1}^{d} q_j^{t-1}(x_j^{(i)}|y')}$$

2. [M-step] Calculate the new parameter values exactly as in the hard case, now using the soft counts $\delta(y|i)$:

$$q^t(y) = \frac{1}{n} \sum_{i=1}^{n} \delta(y|i) \qquad\qquad q_j^t(x|y) = \frac{\sum_{i:\, x_j^{(i)} = x} \delta(y|i)}{\sum_{i} \delta(y|i)}$$

Output: Parameter values $q^T(y)$ and $q_j^T(x|y)$.

Figure 1: The EM Algorithm for Naive Bayes Models

• The same procedure can also be used when some docs are labeled: for a labeled example, $\delta(y|i)$ is simply fixed to its observed label.

EM Example

Setup:
• Cluster into 2 classes ($r$ is binary).
• $q(x|r)$ is a binary multinomial (bag of words).
• Only a subset of the entries in $q(x|r)$ is shown.

Example from: Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze, Introduction to Information Retrieval, Cambridge University Press, 2008.

docID   document text
1       hot chocolate cocoa beans
2       cocoa ghana africa
3       beans harvest ghana
4       cocoa butter
5       butter truffles
6       sweet chocolate

[Figure: trace of the E-step and M-step parameter values across EM iterations ($\alpha_1$, the posteriors $r_{i,1}$, and entries such as $q_{\text{africa},1}$, $q_{\text{brazil},2}$, $q_{\text{cocoa},1}$, $q_{\text{sugar},2}$, $q_{\text{sweet},2}$); the table is truncated in the original preview.]
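To make the algorithms above concrete, here is a minimal runnable sketch in Python. It follows the binary-feature formulation of Figure 1 (each $x_j^{(i)} \in \{-1,+1\}$) rather than the multinomial bag-of-words variant in the Manning et al. example; the function name `em_naive_bayes`, the numpy dependency, and the noisy-uniform initialization are illustrative assumptions, not code from the notes.

```python
# Sketch of (soft or hard) EM for Naive Bayes with binary features,
# following Figure 1. Assumption: features are encoded +1 / -1.
import numpy as np

def em_naive_bayes(X, k, T, rng=None, hard=False):
    """X: (n, d) array with entries in {-1, +1}; k classes; T iterations.

    Returns q_y of shape (k,) and q_xy of shape (d, k), where
    q_xy[j, y] = q_j(+1 | y), so q_j(-1 | y) = 1 - q_xy[j, y].
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Initialization: q_j^0(x|y) >= 0 and sums to 1 over x in {-1,+1}.
    # Small random noise breaks the symmetry between the k classes
    # (with exactly uniform parameters, EM never moves).
    q_y = np.full(k, 1.0 / k)
    q_xy = 0.5 + 0.1 * (rng.random((d, k)) - 0.5)
    for t in range(T):
        # E-step: delta[i, y] = p(y | x^(i); theta^{t-1}).
        present = X[:, :, None] == 1                           # (n, d, 1)
        probs = np.where(present, q_xy[None], 1 - q_xy[None])  # (n, d, k)
        joint = q_y[None] * probs.prod(axis=1)                 # (n, k)
        delta = joint / joint.sum(axis=1, keepdims=True)
        if hard:
            # Hard EM: put all mass on the most probable class.
            h = np.zeros_like(delta)
            h[np.arange(n), delta.argmax(axis=1)] = 1.0
            delta = h
        # M-step: maximum-likelihood estimates given the (soft) counts.
        # (Being a sketch, this ignores the possibility of an empty class.)
        q_y = delta.sum(axis=0) / n
        q_xy = ((X == 1).astype(float).T @ delta) / delta.sum(axis=0)
    return q_y, q_xy
```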
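A hypothetical driver on the six-document corpus above; the vocabulary construction and the +1/-1 present/absent encoding are assumptions made for illustration:

```python
docs = ["hot chocolate cocoa beans", "cocoa ghana africa",
        "beans harvest ghana", "cocoa butter",
        "butter truffles", "sweet chocolate"]
vocab = sorted({w for doc in docs for w in doc.split()})
X = np.array([[+1 if w in doc.split() else -1 for w in vocab]
              for doc in docs])

q_y, q_xy = em_naive_bayes(X, k=2, T=25, rng=0)
for j, w in enumerate(vocab):
    print(f"{w:>10}  q({w}|1)={q_xy[j, 0]:.2f}  q({w}|2)={q_xy[j, 1]:.2f}")
```

With enough iterations the two clusters typically separate the chocolate-themed documents from the ghana/africa documents, mirroring the clustering in the Manning et al. example, though the exact values depend on the random initialization.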