lecture8-evaluation-handout-6-per


Introduction to Information Retrieval

Measuring user happiness (Sec. 8.6.2)

- Enterprise (company/govt/academic): care about "user productivity"
  - How much time do my users save when looking for information?
  - Many other criteria having to do with breadth of access, secure access, etc.

Happiness: elusive to measure (Sec. 8.1)

- Most common proxy: relevance of search results
- But how do you measure relevance?
- We will detail a methodology here, then examine its issues
- Relevance measurement requires 3 elements (see the sketch after these slides):
  1. A benchmark document collection
  2. A benchmark suite of queries
  3. A usually binary assessment of either Relevant or Nonrelevant for each query and each document
  - Some work on more-than-binary, but not the standard

Evaluating an IR system (Sec. 8.1)

- Note: the information need is translated into a query
- Relevance is assessed relative to the information need, not the query
- E.g., information need: I'm looking for information on whether drinking red wine is more effective at reducing your risk of heart attacks than white wine.
- Query: wine...

Standard relevance benchmarks (Sec. 8.2)
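The three benchmark elements above fit together in a simple way: the relevance judgments (often called "qrels") key on (query, document) pairs, and a system's retrieved results are scored against them. Below is a minimal Python sketch of that structure, assuming a toy in-memory benchmark; all names and data are illustrative inventions, not from the lecture or any real test collection.

```python
# Minimal sketch of the three benchmark elements (hypothetical data).

# 1. A benchmark document collection: doc_id -> text.
docs = {
    "d1": "study finds red wine more effective than white at cutting heart attack risk",
    "d2": "white wine sales rise in coastal regions",
    "d3": "grape varieties used for red and white wine production",
}

# 2. A benchmark suite of queries: query_id -> query text.
queries = {
    "q1": "wine red white heart attack effective",
}

# 3. A (usually binary) relevance judgment per (query, document) pair.
# Judgments reflect the information need (is red wine better than white
# for heart-attack risk?), not mere overlap with query words: d2 and d3
# contain query terms but do not address the need.
qrels = {
    ("q1", "d1"): True,
    ("q1", "d2"): False,
    ("q1", "d3"): False,
}

def fraction_relevant(query_id, retrieved_doc_ids):
    """Fraction of the retrieved documents judged Relevant for this query."""
    judged = [qrels.get((query_id, doc_id), False) for doc_id in retrieved_doc_ids]
    return sum(judged) / len(judged) if judged else 0.0

# A hypothetical system returns d3 then d1 for q1: half its results are relevant.
print(fraction_relevant("q1", ["d3", "d1"]))  # 0.5
```

Standard benchmarks such as TREC provide exactly these three pieces at much larger scale, with trained assessors producing the relevance judgments.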
