Introduction to Information Retrieval (Sec. 8.3)


Example query terms: red, white, heart attack, effective

- Evaluate whether the doc addresses the information need, not whether it has these words
- TREC: the National Institute of Standards and Technology (NIST) has run a large IR test bed for many years
- Reuters and other benchmark doc collections used
- "Retrieval tasks" specified
  - sometimes as queries
- Human experts mark, for each query and for each doc, Relevant or Nonrelevant
  - or at least for a subset of docs that some system returned for that query

Unranked retrieval evaluation: Precision and Recall

Should we instead use the accuracy measure for evaluation?

- Precision: fraction of retrieved docs that are relevant = P(relevant|retrieved)
- Recall: fraction of relevant docs that are retrieved = P(retrieved|relevant)
- Given a query, an...
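The set-based definitions above can be sketched in code. This is a minimal illustration, not from the source: the function name and the toy doc IDs are made up, and relevance judgments are assumed to be given as a set of document IDs (as human assessors would produce for a TREC-style collection).

```python
def precision_recall(retrieved, relevant):
    """Unranked evaluation for one query.

    Precision = P(relevant | retrieved) = |retrieved & relevant| / |retrieved|
    Recall    = P(retrieved | relevant) = |retrieved & relevant| / |relevant|
    """
    retrieved, relevant = set(retrieved), set(relevant)
    true_positives = len(retrieved & relevant)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: the system returned 4 docs, 2 of which are
# among the 3 docs judged Relevant for this query.
p, r = precision_recall(retrieved={"d1", "d2", "d3", "d4"},
                        relevant={"d2", "d4", "d5"})
print(p, r)  # precision 2/4 = 0.5, recall 2/3
```

Note that the two measures pull in opposite directions: returning every document drives recall to 1 while precision collapses, which is one reason accuracy alone is a poor evaluation measure here.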