Excerpts from 072: English POS Self-/Co-Training

(automatically label tags)
- N=1: Self-training
- N>1: Co-training [Blum and Mitchell, 1998]
  - assumed independent feature sets, same learner
  - proved bounds on when this will work well; see the paper!
  - for POS, can use different models with the same features

English POS Self-/Co-Training
- Two POS taggers: TnT (HMM++) and C&C (MEMM)
- Agreement-based co-training between the two taggers [Clark, Curran, Osbourne, 2003]

Figure 4: Agreement-based co-training between TnT and C&C (50 seed sentences). The curve that starts at a higher value is for TnT.
Figure 5: Self-training TnT and C&C (500 seed sentences). The upper curve is for TnT; the lower curve is for C&C.
Figure 6: Agreement-based co-training between TnT and C&C (500 seed sentences). The curve that starts at a higher value is for TnT.

Table 2: Naiv… the amount a…
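The N=1 vs. N>1 distinction on the slide can be sketched as one bootstrapping loop. This is a minimal illustration, not the Clark et al. setup: `UnigramTagger` and `bootstrap` are hypothetical names, and the toy tagger just memorises each word's most frequent tag.

```python
from collections import Counter

class UnigramTagger:
    """Toy tagger (illustrative only): tags each word with its most
    frequent tag seen in training; unknown words default to 'NN'."""
    def __init__(self):
        self.best = {}

    def fit(self, labelled):
        counts = {}
        for words, tags in labelled:
            for w, t in zip(words, tags):
                counts.setdefault(w, Counter())[t] += 1
        self.best = {w: c.most_common(1)[0][0] for w, c in counts.items()}

    def tag(self, words):
        return [self.best.get(w, "NN") for w in words]

def bootstrap(taggers, labelled, unlabelled, rounds=3, cache_size=2):
    """N=1 taggers: self-training; N>1: co-training.
    Each round, every tagger labels a cache of unlabelled sentences;
    its output is added to the *other* taggers' training pools
    (or to its own pool when there is only one tagger)."""
    pools = [list(labelled) for _ in taggers]
    for _ in range(rounds):
        cache, unlabelled = unlabelled[:cache_size], unlabelled[cache_size:]
        for pool, tagger in zip(pools, taggers):
            tagger.fit(pool)
        for i, tagger in enumerate(taggers):
            newly = [(sent, tagger.tag(sent)) for sent in cache]
            for j, pool in enumerate(pools):
                if len(taggers) == 1 or j != i:
                    pool.extend(newly)
    for pool, tagger in zip(pools, taggers):
        tagger.fit(pool)
    return taggers
```

With two taggers this is naive co-training (every automatically labelled sentence is added); the agreement-based selection used by Clark et al. would filter the cache first.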
Towards the end of the co-training run, more material is being selected for C&C than TnT. The experiments using a seed set size of 50 showed a similar trend, but the difference between the two taggers was less marked. By examining the subsets chosen from the labelled cache at each round, we also observed that a large proportion of …

Table 3: Naiv… the amount a…

… conducting co-training or self-training … two taggers trained on the entire … POS-eval test set, none of them … shown in Table 3. After adding …, the accuracy for both taggers was … These results demonstrate the … of using a high quality in-domain … word use between the newswire … However, this tagging performance … co-training significantly … the two taggers over …, with co-training strongly outperforming naive co-training. … In these experiments, we used the four example selection … approach … yields the best accuracy … agreement-based co-training and … example selection …

Two POS Taggers
- [Wang, Huang, Harper, 2007]
- HMM and MEMM taggers
- Chinese: Penn Treebank (CTB)
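The excerpt contrasts naive co-training with agreement-based selection, where only cache sentences that both taggers label identically are added. A minimal sketch of that filter, with illustrative stand-in taggers (none of these names come from the papers):

```python
class StubTagger:
    """Toy lookup tagger standing in for TnT / C&C (illustrative only)."""
    def __init__(self, lexicon, default="NN"):
        self.lexicon, self.default = lexicon, default

    def tag(self, words):
        return [self.lexicon.get(w, self.default) for w in words]

def agreement_select(cache, tagger_a, tagger_b):
    """Agreement-based selection: keep only cache sentences on which the
    two taggers produce identical tag sequences; full agreement acts as
    a cheap proxy for labelling confidence."""
    selected = []
    for sent in cache:
        tags_a = tagger_a.tag(sent)
        if tags_a == tagger_b.tag(sent):
            selected.append((sent, tags_a))
    return selected
```

Sentences that pass the filter are then added to the other tagger's training pool; disagreements stay unlabelled rather than injecting noisy tags.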