A Lexical Alignment Model for Probabilistic Textual Entailment

Oren Glickman, Ido Dagan and Moshe Koppel
Bar Ilan University, Ramat Gan 52900, ISRAEL
{glikmao, dagan, koppel}@cs.biu.ac.il
WWW home page: http://cs.biu.ac.il/~{glikmao, dagan, koppel}/

Abstract. This paper describes the Bar-Ilan system participating in the Recognising Textual Entailment Challenge. The paper first proposes a general probabilistic setting that formalizes the notion of textual entailment. We then describe a concrete alignment-based model for lexical entailment, which utilizes web co-occurrence statistics in a bag-of-words representation. Finally, we report the results of the model on the Recognising Textual Entailment challenge dataset along with some analysis.

1 Introduction

Many Natural Language Processing (NLP) applications need to recognize when the meaning of one text can be expressed by, or inferred from, another text. Information Retrieval (IR), Question Answering (QA), Information Extraction (IE), text summarization and Machine Translation (MT) evaluation are examples of applications that need to assess this semantic relationship between text segments. The Recognising Textual Entailment (RTE) task ([8]) has recently been proposed as an application-independent framework for modeling such inferences.

Within the applied textual entailment framework, a text t is said to entail a textual hypothesis h if the truth of h can most likely be inferred from t. Textual entailment indeed captures generically a broad range of inferences that are relevant for multiple applications. For example, a QA system has to identify texts that entail a hypothesized answer. Given the question "Does John speak French?", a text that includes the sentence "John is a fluent French speaker" entails the suggested answer "John speaks French." In many cases, though, entailment inference is uncertain and has a probabilistic nature.
For example, a text that includes the sentence "John was born in France." does not strictly entail the above answer. Yet it clearly increases substantially the likelihood that the hypothesized answer is true. The uncertain nature of textual entailment calls for its explicit modeling in probabilistic terms. We therefore propose a general generative probabilistic setting for textual entailment, which allows a clear formulation of probability spaces and concrete probabilistic models for this task. We suggest that the proposed setting may provide a unifying framework for modeling uncertain semantic inferences from texts.
An important subtask of textual entailment, which we term lexical entailment, is recognizing whether the lexical concepts in a hypothesis h are entailed from a given text t, even if the relations which hold between these concepts in h may not be entailed from t. This is typically a necessary, but not sufficient, condition for textual entailment. For example, in order to infer from a text the hypothesis "Chrysler stock rose," it is necessary that the concepts of Chrysler, stock and rise be inferred from the text. However, for proper entailment it is further
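As a rough illustrative sketch (not the authors' exact formulation), an alignment-based lexical score can be computed by aligning each hypothesis word to its best-matching text word and multiplying the per-word probabilities; here p(u | v) is estimated as a co-occurrence ratio n(u, v) / n(v). The counts below are invented toy numbers, not real web statistics:

```python
# Toy sketch of an alignment-style lexical entailment score.
# Assumption: each hypothesis word u is "explained" by its best-aligned
# text word v, with p(u | v) estimated from invented co-occurrence
# counts n(u, v) / n(v). Illustrates the bag-of-words idea only.

from collections import defaultdict

# Invented toy co-occurrence counts n(u, v) and marginal counts n(v).
COOCCUR = {
    ("chrysler", "chrysler"): 50,
    ("stock", "share"): 30,
    ("rise", "climb"): 20,
    ("stock", "chrysler"): 10,
}
MARGINAL = defaultdict(int, {"chrysler": 50, "share": 40, "climb": 25})


def p_word(u: str, v: str) -> float:
    """Estimate p(u | v) from the toy co-occurrence counts."""
    n_v = MARGINAL[v]
    return COOCCUR.get((u, v), 0) / n_v if n_v else 0.0


def lexical_entailment_score(text: list[str], hypothesis: list[str]) -> float:
    """Product over hypothesis words of the best alignment probability."""
    score = 1.0
    for u in hypothesis:
        score *= max((p_word(u, v) for v in text), default=0.0)
    return score


t = ["chrysler", "share", "climb"]        # bag-of-words text
h = ["chrysler", "stock", "rise"]         # bag-of-words hypothesis
print(lexical_entailment_score(t, h))     # 1.0 * 0.75 * 0.8 = 0.6
```

A hypothesis word with no plausible alignment drives the whole score toward zero, which mirrors the "necessary condition" role of lexical entailment described above.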
