Experiments with a New Boosting Algorithm

DRAFT — PLEASE DO NOT DISTRIBUTE

Yoav Freund    Robert E. Schapire
600 Mountain Avenue, Rooms 2B-428, 2A-424
Murray Hill, NJ 07974-0636
{yoav, schapire}@research.att.com
http://www.research.att.com/orgs/ssr/people/{yoav,schapire}/

January 22, 1996

Abstract

In an earlier paper [9], we introduced a new "boosting" algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a "pseudo-loss," which is a method for forcing a learning algorithm for multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems.

We performed two sets of experiments. The first set compared boosting to Breiman's [1] "bagging" method when used to aggregate various classifiers (including decision trees and single attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem.
1 Introduction

"Boosting" is a general method for improving the performance of any learning algorithm. In theory, boosting can be used to significantly reduce the error of any "weak" learning algorithm that consistently generates classifiers which need only be a little bit better than random guessing. Despite the potential benefits of boosting promised by the theoretical results, the true practical value of boosting can only be assessed by testing the method on "real" learning problems. In this paper, we present such an experimental assessment of a new boosting algorithm called AdaBoost.

Boosting works by repeatedly running a given weak learning algorithm on various distributions over the training data, and then combining the classifiers produced by the weak learner into a single composite classifier. The first provably effective boosting algorithms were presented by Schapire [19] and Freund [8]. More recently, we described and analyzed AdaBoost, and we argued that this new boosting algorithm has certain properties which make it more practical and easier to implement than its predecessors [9]. This algorithm, which we used in all our experiments, is described in detail in Section 2.

This paper describes two distinct sets of experiments. In the first set of experiments, described in Section 3, we compared boosting to "bagging," a method described by Breiman [1] which works in the same general fashion (i.e., by repeatedly rerunning a given weak learning algorithm, and combining the computed classifiers), but which constructs each distribution in a simpler manner. (Details are given below.)
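To make the reweight-and-combine loop described above concrete, the following is a minimal Python sketch in the spirit of AdaBoost.M1. The `weak_learner` interface, function names, and default round count here are illustrative assumptions for this sketch, not the notation or pseudocode of Section 2; see that section for the actual algorithm and its analysis.

```python
import numpy as np

def boost(weak_learner, X, y, n_rounds=10):
    """Minimal sketch of a boosting loop (AdaBoost.M1-style).

    Assumption: weak_learner(X, y, weights) returns a hypothesis h
    whose h.predict(X) yields predicted labels. These names are
    illustrative, not the paper's interface.
    """
    n = len(y)
    weights = np.full(n, 1.0 / n)            # start from the uniform distribution
    hypotheses, alphas = [], []

    for _ in range(n_rounds):
        h = weak_learner(X, y, weights)      # train on the current distribution
        pred = h.predict(X)
        err = np.sum(weights * (pred != y))  # weighted training error
        if err >= 0.5:                       # weak hypothesis must beat random guessing
            break
        err = max(err, 1e-10)                # avoid division by zero if error is 0
        beta = err / (1.0 - err)
        alphas.append(np.log(1.0 / beta))    # vote weight for this round
        hypotheses.append(h)
        # down-weight correctly classified examples, then renormalize
        weights *= np.where(pred == y, beta, 1.0)
        weights /= weights.sum()

    labels = np.unique(y)

    def predict(X_new):
        # weighted majority vote over the stored hypotheses
        votes = np.zeros((len(X_new), len(labels)))
        for h, a in zip(hypotheses, alphas):
            p = h.predict(X_new)
            for j, lab in enumerate(labels):
                votes[:, j] += a * (p == lab)
        return labels[np.argmax(votes, axis=1)]

    return predict
```

A typical choice of weak learner in such a sketch is a very shallow decision tree trained with per-example sample weights; bagging differs from this loop only in that each round draws a bootstrap sample of the training data instead of reweighting it.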