Abstract
Random forests perform bootstrap aggregation by sampling the training examples with replacement. This makes it possible to evaluate the out-of-bag error, which serves as an internal cross-validation mechanism. Our motivation is to use the unsampled training examples to improve the ensemble of decision trees. In this paper we study the effect of using the out-of-bag samples to reduce the generalization error, first of the individual decision trees and then of the random forest, by post-pruning. A preliminary empirical study on four UCI repository datasets shows a consistent decrease in forest size without considerable loss of accuracy.
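The idea of letting each tree's out-of-bag samples guide its cost-complexity pruning can be sketched in a few lines. The following is a minimal illustration of that idea, not the procedure of the paper: it assumes scikit-learn, uses the breast-cancer dataset as a stand-in for the UCI datasets studied, and the number of trees, seeds, and other parameter values are purely illustrative.

```python
# Minimal sketch (assumed setup, not the authors' exact method): grow a bagged
# ensemble of CART trees, then use each tree's out-of-bag (OOB) samples to pick
# a cost-complexity pruning level (ccp_alpha) for that tree.
import numpy as np
from sklearn.datasets import load_breast_cancer  # stand-in dataset
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n = len(y)
trees = []

for _ in range(25):  # illustrative number of trees
    boot = rng.integers(0, n, size=n)        # bootstrap sample (with replacement)
    oob = np.setdiff1d(np.arange(n), boot)   # out-of-bag indices for this tree

    # Candidate pruning levels from the cost-complexity pruning path on the bootstrap sample.
    base = DecisionTreeClassifier(max_features="sqrt", random_state=0).fit(X[boot], y[boot])
    alphas = base.cost_complexity_pruning_path(X[boot], y[boot]).ccp_alphas

    # Keep the alpha whose pruned tree scores best on the OOB samples.
    best_alpha, best_acc = 0.0, -1.0
    for a in alphas:
        t = DecisionTreeClassifier(max_features="sqrt", random_state=0,
                                   ccp_alpha=a).fit(X[boot], y[boot])
        acc = t.score(X[oob], y[oob])
        if acc > best_acc:
            best_alpha, best_acc = a, acc
    trees.append(DecisionTreeClassifier(max_features="sqrt", random_state=0,
                                        ccp_alpha=best_alpha).fit(X[boot], y[boot]))

# Majority vote of the pruned trees, plus the resulting forest size.
votes = np.stack([t.predict(X) for t in trees])
pred = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("ensemble training accuracy:", (pred == y).mean())
print("mean tree size (leaves):", np.mean([t.get_n_leaves() for t in trees]))
```

Selecting one pruning level per tree on its own OOB samples is what lets the forest shrink without a separate validation set; how pruning is coordinated across trees is precisely what the paper investigates.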
Cite this paper
Kiran, B.R., Serra, J. (2017). Cost-Complexity Pruning of Random Forests. In: Angulo, J., Velasco-Forero, S., Meyer, F. (eds) Mathematical Morphology and Its Applications to Signal and Image Processing. ISMM 2017. Lecture Notes in Computer Science, vol. 10225. Springer, Cham. https://doi.org/10.1007/978-3-319-57240-6_18