Introduction to Information Retrieval, Sec. 8.5: Kappa

MAP and R-precision (Sec. 8.4)

- MAP for a query collection is the arithmetic average of average precision over all queries.
- Macro-averaging: each query counts equally.
- R-precision: if we have a known (though perhaps incomplete) set of relevant documents of size Rel, then calculate the precision of the top Rel docs returned.
- A perfect system could score 1.0.

Variance (Sec. 8.4)

- For a test collection, it is usual that a system does crummily on some information needs (e.g., MAP = 0.1) and excellently on others (e.g., MAP = 0.7).
- Indeed, it is usually the case that the variance in performance of the same system across queries is much greater than the variance of different systems on the same query.
- That is, there are easy information needs and hard ones!

CREATING TEST COLLECTIONS FOR IR EVALUATION
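The metrics above can be made concrete with a short sketch. The following is a minimal, hypothetical implementation (function names are my own, not from the slides): R-precision is the precision of the top Rel returned documents, average precision averages the precision values at each rank where a relevant document appears, and MAP is the macro-average of average precision over queries so that each query counts equally.

```python
def r_precision(ranked, relevant):
    """Precision of the top |Rel| returned documents."""
    r = len(relevant)
    top = ranked[:r]
    return sum(1 for doc in top if doc in relevant) / r

def average_precision(ranked, relevant):
    """Average of precision@k at each rank k where a relevant doc appears.

    Relevant documents that are never retrieved contribute precision 0,
    which is why we divide by |Rel| rather than by the number of hits.
    """
    hits = 0
    precision_sum = 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP: arithmetic (macro) average of AP, so each query counts equally.

    `runs` is a list of (ranked_list, relevant_set) pairs, one per query.
    """
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)
```

Note that a system ranking every relevant document above every non-relevant one scores 1.0 on both measures, matching the "perfect system" remark above.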