To assess the effect of the default choice of the parameter K on these conclusions, we ran a side experiment in which the value of K was adjusted for each run of the Extra-Trees method by a 10-fold cross-validation technique internal to the learning sample. The detailed numerical results are given in Table 8 (Appendix D) in the column denoted ETcv.
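As an illustration of this tuning procedure, here is a minimal sketch assuming scikit-learn's ExtraTreesClassifier, whose max_features parameter plays the role of K; the dataset and candidate grid are placeholders, and this is not the authors' implementation.

```python
# Minimal sketch (not the authors' code): adjust K by 10-fold cross-validation
# internal to the learning sample, using scikit-learn's ExtraTreesClassifier,
# whose max_features parameter plays the role of K.
from sklearn.datasets import load_iris
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)   # placeholder learning sample
n = X.shape[1]                       # number of input attributes

# Candidate values of K, from fully randomized splits (K = 1) up to K = n.
param_grid = {"max_features": list(range(1, n + 1))}

# 10-fold cross-validation internal to the learning sample selects K.
search = GridSearchCV(
    ExtraTreesClassifier(n_estimators=100, random_state=0),
    param_grid,
    cv=10,
)
search.fit(X, y)
print("Selected K:", search.best_params_["max_features"])
```

In this sketch each candidate K is evaluated by 10-fold cross-validation on the learning sample only, mirroring the ETcv protocol described above; the test data play no role in the selection.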
Significance tests show that 22 times out of 24 the ETcv variant performs the same as ETd, and two times it wins (on the Segment and Isolet datasets, where slightly better results are obtained for K values higher than the default setting). In terms of Win/Draw/Loss reports with respect to the other methods, the ETcv variant also appears slightly better than ETd. Finally, the comparison of the ETcv version with an ET variant (see Table 8) shows that there is no significant difference (24 draws on 24 datasets) between these two variants. These results confirm that the conclusions in terms of accuracy would have been affected only marginally, in favor of the Extra-Trees algorithm, if we had used the version which adjusts K by cross-validation instead of the default settings. Given the very small gain in accuracy with respect to the default setting and the very high computational cost, we do not advocate the use of the cross-validation procedure together with the Extra-Trees algorithm, except in very special circumstances where one can foresee a priori that the default value would be suboptimal. These issues are further discussed in Section 3, together with the analysis of the other two parameters of the method.

2.3. Computational requirements

We compare Extra-Trees with CART, Tree Bagging and Random Forests. In this comparison, we have used unpruned CART trees.³ We also run Random Forests with the same default settings as Extra-Trees (K = √n or K = n), so as to put the two methods in similar conditions from the computational point of view. Notice that we dropped the Random Subspace method from this comparison, since its computational requirements are systematically larger than those of Random Forests under identical conditions.⁴

Tables 3 and 4 provide respectively tree complexities (number of leaves of a tree or of an ensemble of trees) and CPU times (in msec)⁵ required by the tree growing phase, averaged over the 10 or 50 runs for each dataset. The left part of these tables reports results for classification problems, and the right part results for regression problems. Notice that, because in regression the default value of K is equal to n, the Random Forests method degenerates into Tree Bagging; their results are therefore merged into a single column.

Regarding complexity, Table 3 shows that Extra-Trees are between 1.5 and 3 times larger than Random Forests in terms of number of leaves. The average ratio is 2.69 over the classification problems and 1.67 over the regression problems. However, in terms of average tree depth, this increase in complexity is much smaller, because the number of leaves grows exponentially with tree depth. Thus, Extra-Trees are on average no more than two levels deeper than Random Forests.
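To make the depth argument concrete (a back-of-the-envelope illustration based on the average leaf ratios above, not a computation reported in the tables): a roughly balanced binary tree with L leaves has depth of about log2 L, so a multiplicative increase in the number of leaves translates into only an additive increase in depth,

\[
\Delta d \;\approx\; \log_2 \frac{L_{\mathrm{ET}}}{L_{\mathrm{RF}}},
\qquad \log_2 2.69 \approx 1.43,
\qquad \log_2 1.67 \approx 0.74,
\]

i.e. roughly one to one and a half extra levels on the classification problems and less than one extra level on the regression problems.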