Abstract
We study the effectiveness of subagging, or subsample aggregating, on regression trees, a
popular non-parametric method in machine learning. First, we give sufficient conditions
for pointwise consistency of trees. We formalize that (i) the bias depends on the diameter
of cells, hence trees with few splits tend to be biased, and (ii) the variance depends on the
number of observations in cells, hence trees with many splits tend to have large variance.
While these statements for bias and variance are known to hold globally in the covariate
space, we show that, under some constraints, they are also true locally. Second, we compare
the performance of subagging to that of trees across different numbers of splits. We find
that (1) for any given number of splits, subagging improves upon a single tree, and (2)
this improvement is larger for many splits than it is for few splits. However, (3) a single
tree grown at optimal size can outperform subagging if the size of the subagged trees
is not optimally chosen. This last result goes against the common practice of growing large
randomized trees to eliminate bias and then averaging to reduce variance.
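To make the object of study concrete, the following is a minimal sketch of subagging for a one-dimensional regression tree: each tree is fit on a subsample drawn without replacement (in contrast to the bootstrap resampling of bagging), and predictions are averaged. This is an illustration, not the paper's construction; the function names, tree depth parameter, and toy data are assumptions for the example.

```python
import numpy as np

def build_tree(x, y, depth):
    """Greedily grow a 1-D regression tree by squared-error splitting.

    Returns ("leaf", cell_mean) or ("node", threshold, left, right).
    """
    if depth == 0 or len(x) < 2 or np.all(x == x[0]):
        return ("leaf", float(y.mean()))
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_sse, best_t = np.inf, None
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue  # no valid threshold between equal covariate values
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_t = sse, (xs[i - 1] + xs[i]) / 2
    if best_t is None:
        return ("leaf", float(y.mean()))
    mask = x <= best_t
    return ("node", best_t,
            build_tree(x[mask], y[mask], depth - 1),
            build_tree(x[~mask], y[~mask], depth - 1))

def predict_one(tree, xi):
    """Route a single point to its cell and return that cell's mean."""
    while tree[0] == "node":
        tree = tree[2] if xi <= tree[1] else tree[3]
    return tree[1]

def subag_predict(x, y, x_test, n_est=25, frac=0.5, depth=10, seed=0):
    """Subagging: average trees, each fit on a subsample drawn
    WITHOUT replacement (a fraction `frac` of the training set)."""
    rng = np.random.default_rng(seed)
    m = int(frac * len(x))
    preds = np.zeros(len(x_test))
    for _ in range(n_est):
        idx = rng.choice(len(x), size=m, replace=False)  # subsample, not bootstrap
        tree = build_tree(x[idx], y[idx], depth)
        preds += np.array([predict_one(tree, v) for v in x_test])
    return preds / n_est
```

On noisy data, a single deep tree has small cells and hence high variance; averaging subsampled trees damps that variance, matching findings (1) and (2) above. Finding (3) corresponds to comparing this ensemble against a single tree whose `depth` has been tuned.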
| Original language | English |
| --- | --- |
| Pages | 1-29 |
| Number of pages | 29 |
| DOIs | |
| Publication status | Published - 2 Apr 2024 |
Publication series
| Name | ArXiv Pre-print |
| --- | --- |
Keywords
- regression trees
- pointwise consistency
- bias-variance trade-off
- bagging
- CART
- performance at optimal sizes
- performance across sizes