Lecture 18: Evaluation in IR
Introduction to Information Retrieval, INF 141 / CS 121
Donald J. Patterson
Content adapted from Hinrich Schütze

Information need

Remember: the user has an information need, not a query. Relevance is assessed in relation to the information need, not the query.

Example information need: "I am looking for information on whether drinking red wine is more effective than eating chocolate at reducing the risk of heart attacks."
Query: red wine heart attack effective chocolate risk
The question is whether the document addresses the need, not merely whether it matches the query.

Relevance benchmarks

- TREC: the National Institute of Standards and Technology (NIST) has run a large IR test bed for many years.
- Reuters and other benchmark document collections.
- Retrieval tasks, sometimes specified as queries.
- Human experts mark, for each query and for each document: Relevant or Irrelevant.

Unranked retrieval

Precision: the fraction of retrieved documents that are relevant.
Recall: the fraction of relevant documents that are retrieved.

These definitions follow from the contingency table:

                 Relevant   Not Relevant
Retrieved        TP         FP
Not Retrieved    FN         TN

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)

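To make the definitions concrete, here is a minimal sketch in Python (the language and the function name are illustrative choices, not from the lecture), computing both measures from collections of document IDs:

```python
def precision_recall(retrieved, relevant):
    """Unranked precision and recall, given collections of document IDs."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)    # relevant documents we retrieved
    fp = len(retrieved - relevant)    # non-relevant documents we retrieved
    fn = len(relevant - retrieved)    # relevant documents we missed
    precision = tp / (tp + fp) if retrieved else 0.0
    recall = tp / (tp + fn) if relevant else 0.0
    return precision, recall

# 3 of the 4 retrieved documents are relevant -> precision = 0.75;
# 3 of the 6 relevant documents were found    -> recall = 0.5.
print(precision_recall({1, 2, 3, 4}, {2, 3, 4, 5, 6, 7}))  # (0.75, 0.5)
```
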
Unranked retrieval - Accuracy

The difficulty with measuring "accuracy": in one sense, accuracy is simply the fraction of judgments (over the same contingency table as above) that you make correctly.

Accuracy = (TP + TN) / (TP + FP + FN + TN)

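One way to see the difficulty concretely: in IR, relevant documents are usually a tiny fraction of the collection, so accuracy can be near perfect for a useless system. A small sketch with invented collection sizes (1,000,000 documents, 100 relevant; the figures are illustrative only):

```python
# Invented numbers: a collection of 1,000,000 documents, of which
# only 100 are relevant to the query. A degenerate system that
# retrieves nothing at all still judges almost everything correctly.
tp, fp = 0, 0
fn = 100                # every relevant document is missed
tn = 1_000_000 - 100    # every non-relevant document correctly not retrieved

accuracy = (tp + tn) / (tp + fp + fn + tn)
print(f"accuracy = {accuracy:.4f}")   # 0.9999, yet recall is 0
```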

Exercise