Introduction to Information Retrieval, Lecture 8: Evaluation (handout)

Macroaveraging: each query counts equally. R-precision. (See the sketch below.)
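A minimal Python sketch of these two ideas (the language is a choice of this writeup; the document contains no code). R-precision is precision at rank R, where R is the number of relevant documents for the query; macroaveraging then weights every query equally. The query IDs, document IDs, and set-of-relevant-docs layout are hypothetical illustrations, not data from the lecture.

```python
def r_precision(ranked_docs, relevant):
    """R-precision: precision at rank R, where R = number of relevant docs."""
    R = len(relevant)
    if R == 0:
        return 0.0
    top_R = ranked_docs[:R]
    return sum(1 for d in top_R if d in relevant) / R

def macro_average(per_query_scores):
    """Macroaveraging: each query counts equally, regardless of its size."""
    return sum(per_query_scores) / len(per_query_scores)

# Two hypothetical queries (made-up IDs and judgments):
runs = {
    "q1": (["d3", "d1", "d7", "d2"], {"d1", "d2"}),        # R = 2 -> 1/2
    "q2": (["d5", "d9", "d4"], {"d5", "d9", "d4", "d8"}),  # R = 4 -> 3/4
}
scores = [r_precision(docs, rel) for docs, rel in runs.values()]
print(macro_average(scores))  # (0.5 + 0.75) / 2 = 0.625
```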


Graphs are good, but people want summary measures!

- Precision at fixed retrieval level
  - Precision-at-k: precision of the top k results (see the sketch below)
  - Perhaps appropriate for most of web search: all people want are good matches on the first one or two results pages
  - But: it averages badly and has an arbitrary parameter k
- 11-point interpolated average precision (see the sketch below)
  - The standard measure in the early TREC competitions: take the precision at 11 recall levels, varying from 0 to 1 by tenths, using interpolation (the value at recall 0 is always interpolated!), and average them
  - Evaluates performance at all recall levels

Typical (good) 11-point precisions (Sec. 8.4)
[Figure: SabIR/Cornell 8A1 11-pt precision from TREC 8 (1999)]

Yet more evaluation measures…

- Mean average precision (MAP) (see the sketch below)
  - Average of the precision values obtained for the top k documents, each time a relevant doc is retrieved
  - Avoids interpolation and the use of fixed recall levels
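As a rough illustration of precision-at-k, here is a minimal Python sketch; the ranking and relevance judgments are made-up examples, not data from the lecture.

```python
def precision_at_k(ranked_docs, relevant, k):
    """P@k: fraction of the top k results that are relevant."""
    top_k = ranked_docs[:k]
    return sum(1 for d in top_k if d in relevant) / k

# P@5 for a hypothetical ranking: d2 and d1 are relevant -> 2/5
print(precision_at_k(["d2", "d9", "d1", "d4", "d7"], {"d1", "d2", "d3"}, 5))  # 0.4
```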
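A sketch of 11-point interpolated average precision under the definition above. It assumes the usual TREC convention that interpolated precision at recall level r is the maximum precision at any point with recall >= r, which is why the value at recall 0 is always interpolated; function and variable names are my own.

```python
def eleven_point_interpolated_ap(ranked_docs, relevant):
    """Average of interpolated precision at recall 0.0, 0.1, ..., 1.0."""
    R = len(relevant)
    # (recall, precision) after each retrieved document
    points = []
    hits = 0
    for i, d in enumerate(ranked_docs, start=1):
        if d in relevant:
            hits += 1
        points.append((hits / R, hits / i))
    interpolated = []
    for level in [i / 10 for i in range(11)]:
        # interpolated precision: best precision at any recall >= this level
        candidates = [p for r, p in points if r >= level]
        interpolated.append(max(candidates) if candidates else 0.0)
    return sum(interpolated) / 11
```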
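And a sketch of MAP: per query, average the precision observed each time a relevant document is retrieved, then macroaverage across queries. The slide text is cut off, so one detail here is an assumption: the common convention that relevant documents never retrieved contribute a precision of 0 (i.e., the per-query sum is divided by the total number of relevant documents).

```python
def average_precision(ranked_docs, relevant):
    """Average of the precision values taken each time a relevant doc is retrieved."""
    hits = 0
    precisions = []
    for i, d in enumerate(ranked_docs, start=1):
        if d in relevant:
            hits += 1
            precisions.append(hits / i)
    # Assumed convention: unretrieved relevant docs contribute precision 0.
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP: macroaverage of per-query average precision."""
    aps = [average_precision(docs, rel) for docs, rel in runs]
    return sum(aps) / len(aps)
```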