jurafsky&martin_3rdEd_17 (1).pdf

Two common approaches to sequence modeling are a generative approach, HMM tagging, and a discriminative approach, MEMM tagging. The probabilities in HMM taggers are estimated not by EM but directly by maximum likelihood estimation on hand-labeled training corpora. The Viterbi algorithm is used to find the most likely tag sequence.

Maximum entropy Markov model or MEMM taggers train logistic regression models to pick the best tag given an observation word, its context, and the previous tags, and then use Viterbi to choose the best sequence of tags for the sentence. More complex augmentations of the MEMM exist, like the Conditional Random Field (CRF) tagger. Modern taggers are generally run bidirectionally.

Bibliographical and Historical Notes

What is probably the earliest part-of-speech tagger was part of the parser in Zellig Harris's Transformations and Discourse Analysis Project (TDAP), implemented between June 1958 and July 1959 at the University of Pennsylvania (Harris, 1962), although earlier systems had used part-of-speech information in dictionaries. TDAP used 14 hand-written rules for part-of-speech disambiguation; the use of part-of-speech tag sequences and the relative frequency of tags for a word prefigures all modern algorithms. The parser, whose implementation essentially corresponded to a cascade of finite-state transducers, was later reimplemented (Joshi and Hopely 1999; Karttunen 1999).

The Computational Grammar Coder (CGC) of Klein and Simmons (1963) had three components: a lexicon, a morphological analyzer, and a context disambiguator. The small 1500-word lexicon listed only function words and other irregular words. The morphological analyzer used inflectional and derivational suffixes to assign part-of-speech classes. These were run over words to produce candidate parts of speech, which were then disambiguated by a set of 500 context rules relying on surrounding islands of unambiguous words.
For example, one rule said that between an ARTICLE and a VERB, the only allowable sequences were ADJ-NOUN, NOUN-ADVERB, or NOUN-NOUN. The CGC algorithm reported 90% accuracy on applying a 30-tag tagset to a corpus of articles.

The TAGGIT tagger (Greene and Rubin, 1971) was based on the Klein and Simmons (1963) system, using the same architecture but increasing the size of the dictionary and the size of the tagset to 87 tags. TAGGIT was applied to the Brown corpus and, according to Francis and Kučera (1982, p. 9), accurately tagged 77% of the corpus; the remainder of the Brown corpus was then tagged by hand.

All these early algorithms were based on a two-stage architecture in which a dictionary was first used to assign each word a list of potential parts of speech, and in the second stage large lists of hand-written disambiguation rules winnowed down this list to a single part of speech for each word.
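The two-stage architecture just described can be sketched in a few lines. The lexicon, tag names, and example sentence below are hypothetical; only the winnowing rule itself paraphrases the CGC constraint that between an ARTICLE and a VERB the only allowable sequences are ADJ-NOUN, NOUN-ADVERB, or NOUN-NOUN.

```python
from itertools import product

# Stage 1: a dictionary assigns each word a list of candidate tags.
# This toy lexicon is made up for illustration.
LEXICON = {
    "the":   ["ART"],            # unambiguous "island"
    "light": ["ADJ", "VERB"],    # ambiguous
    "rain":  ["NOUN", "VERB"],   # ambiguous
    "falls": ["VERB"],           # unambiguous "island"
}

# Stage 2: a hand-written context rule winnows the candidates.
# Between an ARTICLE and a VERB, the only allowable sequences are:
ALLOWED_BETWEEN_ART_VERB = {("ADJ", "NOUN"), ("NOUN", "ADV"), ("NOUN", "NOUN")}

def disambiguate(words):
    """Resolve the ambiguous span between the unambiguous first and
    last words by keeping the first candidate combination the rule allows."""
    cands = [LEXICON[w] for w in words]
    middle = cands[1:-1]
    for combo in product(*middle):
        if combo in ALLOWED_BETWEEN_ART_VERB:
            return [cands[0][0], *combo, cands[-1][0]]
    return None  # no rule applies; the ambiguity survives

print(disambiguate(["the", "light", "rain", "falls"]))
# → ['ART', 'ADJ', 'NOUN', 'VERB']
```

Real systems of the era used hundreds of such rules and larger tagsets, but the shape is the same: dictionary lookup produces candidates, and context rules anchored on unambiguous neighbors prune them.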
Soon afterwards, the alternative probabilistic architectures began to be developed.
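The HMM tagging approach from the summary above is the core of these probabilistic architectures: transition and emission probabilities estimated by maximum likelihood from a hand-labeled corpus, and Viterbi decoding to find the best tag sequence. Here is a minimal sketch; the three-tag tagset and all probability values are made-up toy numbers, not estimates from any real corpus.

```python
import math

TAGS = ["DET", "NOUN", "VERB"]

# In a real tagger these would be MLE counts from hand-labeled data;
# here they are invented for a three-word example. "<s>" is the start state.
TRANS = {  # P(tag | previous tag)
    ("<s>", "DET"): 0.6, ("<s>", "NOUN"): 0.3, ("<s>", "VERB"): 0.1,
    ("DET", "NOUN"): 0.9, ("DET", "DET"): 0.05, ("DET", "VERB"): 0.05,
    ("NOUN", "VERB"): 0.6, ("NOUN", "NOUN"): 0.3, ("NOUN", "DET"): 0.1,
    ("VERB", "DET"): 0.5, ("VERB", "NOUN"): 0.4, ("VERB", "VERB"): 0.1,
}
EMIT = {  # P(word | tag)
    ("DET", "the"): 0.7,
    ("NOUN", "dog"): 0.4, ("NOUN", "runs"): 0.05,
    ("VERB", "runs"): 0.5, ("VERB", "dog"): 0.01,
}

SMALL = 1e-12  # stand-in probability for unseen events

def logp(table, key):
    return math.log(table.get(key, SMALL))

def viterbi(words):
    """Most probable tag sequence under the toy bigram HMM."""
    # v[t] = best log-probability of any tag path ending in t
    v = {t: logp(TRANS, ("<s>", t)) + logp(EMIT, (t, words[0])) for t in TAGS}
    backptrs = []  # one backpointer dict per word after the first
    for w in words[1:]:
        new_v, bp = {}, {}
        for t in TAGS:
            prev = max(TAGS, key=lambda p: v[p] + logp(TRANS, (p, t)))
            new_v[t] = v[prev] + logp(TRANS, (prev, t)) + logp(EMIT, (t, w))
            bp[t] = prev
        v = new_v
        backptrs.append(bp)
    # Recover the best path by following backpointers from the best final tag.
    tag = max(TAGS, key=v.get)
    seq = [tag]
    for bp in reversed(backptrs):
        tag = bp[tag]
        seq.append(tag)
    return seq[::-1]

print(viterbi(["the", "dog", "runs"]))  # → ['DET', 'NOUN', 'VERB']
```

An MEMM tagger keeps the same Viterbi decoding but replaces the product of transition and emission probabilities with a logistic regression score conditioned on the word, its context, and the previous tags.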
