15-learning-for-sear – Adaptive Search Engines: Learning Ranking Functions for Search Engines (Foundations of Artificial Intelligence, CS472, Fall 2007)

Foundations of Artificial Intelligence
Learning Ranking Functions for Search Engines
CS472 – Fall 2007
Thorsten Joachims
Joint work with: Filip Radlinski, Geri Gay, Laura Granka, Helene Hembrooke, Bing Pang

Adaptive Search Engines
Current search engines:
– One-size-fits-all
– Hand-tuned retrieval function
Hypothesis:
– Different users need different retrieval functions
– Different collections need different retrieval functions
Machine learning:
– Learn improved retrieval functions
– Use user feedback as training data

Overview
How can we get training data for learning improved retrieval functions?
– Explicit vs. implicit feedback
– User study with eye-tracking and relevance judgments
– Absolute vs. relative feedback
– Accuracy of implicit feedback
Which learning algorithms can use this training data effectively?
– Ranking Support Vector Machine
– User study with a meta-search engine

Sources of Feedback
Explicit feedback:
– Overhead for the user
– Only few users give feedback => not representative
Implicit feedback:
– Queries, clicks, time, mousing, scrolling, etc.
– No overhead
– More difficult to interpret

Feedback from Clickthrough Data
Example result list (the user clicked on results 1, 3, and 7):
1. Kernel Machines – http://svm.first.gmd.de/
2. Support Vector Machine – http://jbolivar.freeservers.com/
3. SVM-Light Support Vector Machine – http://ais.gmd.de/~thorsten/svm_light/
4. An Introduction to Support Vector Machines – http://www.support-vector.net/
5. Support Vector Machine and Kernel ... References – http://svm.research.bell-labs.com/SVMrefs.html
6. Archives of SUPPORT-VECTOR-MACHINES ... – http://www.jiscmail.ac.uk/lists/SUPPORT...
7. Lucent Technologies: SVM demo applet – http://svm.research.bell-labs.com/SVT/SVMsvt.html
8. Royal Holloway Support Vector Machine – http://svm.dcs.rhbnc.ac.uk
Relative feedback: clicks reflect preferences between observed links. A clicked link is taken to be preferred over every skipped link ranked above it, yielding the preference pairs (3 ≻ 2), (7 ≻ 2), (7 ≻ 4), (7 ≻ 5), (7 ≻ 6).
Absolute feedback: the clicked links are relevant to the query, i.e. Rel(1), NotRel(2), Rel(3), NotRel(4), NotRel(5), NotRel(6), Rel(7).
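The relative-feedback rule above (a clicked link is preferred over every skipped link ranked above it) is easy to sketch in code. The function name and representation here are illustrative assumptions, not from the lecture; results are identified by their 1-based rank.

```python
def preferences_from_clicks(clicked):
    """Turn a set of clicked rank positions into preference pairs.

    Each clicked result is taken to be preferred over every result
    ranked above it that was skipped (observed but not clicked).
    Returns (preferred, less_preferred) pairs.
    """
    clicked = set(clicked)
    pairs = []
    for c in sorted(clicked):
        # Every non-clicked position above c was seen and skipped.
        pairs.extend((c, above) for above in range(1, c)
                     if above not in clicked)
    return pairs

# Clicks on results 1, 3, and 7, as in the example ranking:
print(preferences_from_clicks([1, 3, 7]))
# [(3, 2), (7, 2), (7, 4), (7, 5), (7, 6)]
```

Applied to the example result list, this reproduces the five preference pairs on the slide; note that a click on the top result generates no pairs, since nothing was skipped above it.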
Is Implicit Feedback Reliable?
How do users choose where to click?
– How many abstracts do users evaluate before clicking?
– Do users scan abstracts from top to bottom?
– Do users view all abstracts above a click?
– Do users look below a clicked abstract?
How do clicks relate to relevance?
– Absolute feedback: Are clicked links relevant? Are non-clicked links not relevant?
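The Ranking SVM mentioned in the overview learns a linear scoring function from exactly such preference pairs, by reducing each pair to a classification constraint on the difference of the two results' feature vectors. The following is only a minimal subgradient-descent sketch with made-up toy features and hyperparameters, not the lecture's formulation or implementation:

```python
import numpy as np

def train_ranking_svm(features, pairs, C=1.0, lr=0.01, epochs=200):
    """features: dict rank -> feature vector; pairs: (preferred, other).

    Approximately minimizes 0.5*||w||^2 + C * sum of hinge losses
    max(0, 1 - w.(x_pref - x_other)) by batch subgradient descent.
    """
    dim = len(next(iter(features.values())))
    w = np.zeros(dim)
    for _ in range(epochs):
        grad = w.copy()                  # gradient of the regularizer
        for pref, other in pairs:
            diff = features[pref] - features[other]
            if w @ diff < 1.0:           # margin violated -> hinge subgradient
                grad -= C * diff
        w -= lr * grad
    return w

# Toy 2-d features (assumed) for three results; result 3 was clicked,
# so the pair (3, 2) says result 3 should score higher than result 2.
feats = {1: np.array([1.0, 0.2]),
         2: np.array([0.1, 0.9]),
         3: np.array([0.9, 0.1])}
w = train_ranking_svm(feats, [(3, 2)])
assert w @ feats[3] > w @ feats[2]   # learned ranking satisfies the preference
```

The reduction to difference vectors is what lets an ordinary large-margin classifier learn from the relative judgments that clicks provide, without ever needing absolute relevance labels.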

This note was uploaded on 02/19/2008 for the course CS 4700 taught by Professor Joachims during the Fall '07 term at Cornell University (Engineering School).
