Adding in those missing 29 plots to the same Big Tune recipe.

2021-05-13
Last iteration of these models: 2021-02-02
Metric | RF (ranger) | GBM (LightGBM) | SVM (kernlab) | Ensemble (model weighted) | Ensemble (RMSE weighted)
---|---|---|---|---|---
RMSE (Mg/ha) | 37.507 | 37.052 | 37.381 | 36.861 | 36.618
MBE (Mg/ha) | -1.157 | -1.853 | -3.273 | -1.556 | -2.059
R2 | 0.788 | 0.793 | 0.789 | 0.793 | 0.799
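For reference, these metrics can be computed from held-out predictions along these lines. This is a minimal sketch, not the post's actual code; `obs` and `pred` are hypothetical vectors of observed and predicted AGB, and the MBE sign convention is an assumption:

```r
# Hypothetical vectors of observed and predicted AGB (Mg/ha) on held-out data:
# obs <- testing$agb_mgha; pred <- predict(model, testing)

# Root mean squared error
rmse <- function(obs, pred) sqrt(mean((obs - pred)^2))

# Mean bias error; the (pred - obs) sign convention is an assumption, under
# which negative values mean the model underpredicts on average
mbe <- function(obs, pred) mean(pred - obs)

# R2 here as squared Pearson correlation; 1 - SSE/SST is another common choice
r2 <- function(obs, pred) cor(obs, pred)^2
```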
For scale, the distribution of observed AGB (Mg/ha) across the combined training and testing sets:

```
summary(bind_rows(training, testing)$agb_mgha)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
  0.000   9.386  86.165  92.153 149.404 425.363 
```
Across 1000 bootstrap iterations, our ensemble model had a mean RMSE of 36.838 ± 0.331.
RMSE (Mg/ha) | Min | Median | Max
---|---|---|---
RF | 34.156 | 40.022 | 44.767
LGB | 34.409 | 40.412 | 45.238
SVM | 34.603 | 41.073 | 47.701
Ensemble | 33.750 | 39.950 | 44.774
R2 | Min | Median | Max
---|---|---|---
RF | 0.670 | 0.740 | 0.796
LGB | 0.664 | 0.734 | 0.794
SVM | 0.637 | 0.723 | 0.784
Ensemble | 0.675 | 0.741 | 0.800
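A bootstrap evaluation like this can be run along these lines. This is a sketch under assumptions: the resampling scheme, seed, and the `predict_ensemble()` helper are hypothetical, not the post's actual code:

```r
set.seed(123)  # assumption: any fixed seed, for reproducibility

boot_rmse <- vapply(seq_len(1000), function(i) {
  # Resample the held-out plots with replacement and re-score the model
  boot <- testing[sample(nrow(testing), replace = TRUE), ]
  # predict_ensemble() is a hypothetical helper returning ensemble predictions
  sqrt(mean((boot$agb_mgha - predict_ensemble(boot))^2))
}, numeric(1))

mean(boot_rmse)
sd(boot_rmse)
```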
The per-model weights used by the weighted ensemble:

```
      lgb        rf       svm 
0.3396700 0.3457726 0.3145575 
```
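One common way to produce weights like these is to normalize the component models' inverse RMSEs; a sketch of that scheme follows. It is an assumption that this is how these particular weights were derived, and the RMSE values plugged in are just the test-set values from the first table:

```r
# Hypothetical per-model RMSEs (here, the test-set values reported above)
rmses <- c(lgb = 37.052, rf = 37.507, svm = 37.381)

# Weight each model in proportion to 1 / RMSE, normalized to sum to 1
weights <- (1 / rmses) / sum(1 / rmses)

# The ensemble prediction is then a convex combination of the three models:
# ensemble <- weights["rf"] * rf_pred + weights["lgb"] * lgb_pred +
#   weights["svm"] * svm_pred
```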
The model-weighted ensemble, meanwhile, is a linear model fit to the three sets of predictions:

```
Call:
lm(formula = agb_mgha ~ rf_pred * lgb_pred * svm_pred, data = pred_values)

Residuals:
     Min       1Q   Median       3Q      Max 
-127.608  -22.060   -0.713   15.174  203.474 

Coefficients:
                            Estimate Std. Error t value Pr(>|t|)    
(Intercept)               -3.306e+00  5.860e-01  -5.642 1.70e-08 ***
rf_pred                    1.530e-01  8.414e-02   1.818  0.06909 .  
lgb_pred                   8.837e-01  8.910e-02   9.918  < 2e-16 ***
svm_pred                   1.740e-01  5.341e-02   3.257  0.00113 ** 
rf_pred:lgb_pred          -1.790e-03  3.709e-04  -4.826 1.40e-06 ***
rf_pred:svm_pred           4.294e-03  6.355e-04   6.758 1.44e-11 ***
lgb_pred:svm_pred         -5.143e-03  6.553e-04  -7.849 4.36e-15 ***
rf_pred:lgb_pred:svm_pred  9.149e-06  1.368e-06   6.688 2.31e-11 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 39.72 on 23492 degrees of freedom
Multiple R-squared:  0.7385,	Adjusted R-squared:  0.7384 
F-statistic:  9476 on 7 and 23492 DF,  p-value: < 2.2e-16
```
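Since the Call above gives the formula and data, refitting and predicting with this stacking model looks roughly like the following; `pred_values` comes from the output above, while `new_preds` is a hypothetical data frame of new component-model predictions:

```r
# Stack the component models: regress observed AGB on the three predictions
# and all of their interactions (matches the Call above)
stack <- lm(agb_mgha ~ rf_pred * lgb_pred * svm_pred, data = pred_values)

# Ensemble predictions for new plots; new_preds is a hypothetical data frame
# with rf_pred, lgb_pred, and svm_pred columns
ensemble_pred <- predict(stack, newdata = new_preds)
```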
Random forest:

```
$num.trees
[1] 1000

$mtry
[1] 18

$min.node.size
[1] 7

$sample.fraction
[1] 0.2

$splitrule
[1] "variance"

$replace
[1] TRUE

$formula
agb_mgha ~ .
```
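Plugging those values into `ranger()` would look like the sketch below; the `training` data frame name is an assumption:

```r
library(ranger)

rf_fit <- ranger(
  agb_mgha ~ .,
  data = training,      # assumption: the post's training data frame
  num.trees = 1000,
  mtry = 18,
  min.node.size = 7,
  sample.fraction = 0.2,
  splitrule = "variance",
  replace = TRUE
)
```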
LGB:

```
$learning_rate
[1] 0.05

$nrounds
[1] 100

$num_leaves
[1] 5

$max_depth
[1] 2

$extra_trees
[1] TRUE

$min_data_in_leaf
[1] 10

$bagging_fraction
[1] 0.3

$bagging_freq
[1] 1

$feature_fraction
[1] 0.4

$min_data_in_bin
[1] 8

$lambda_l1
[1] 5

$lambda_l2
[1] 1

$force_col_wise
[1] TRUE
```
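A sketch of fitting LightGBM with these parameters via the R package's `lightgbm()` wrapper; the `training` data frame and the `objective = "regression"` setting are assumptions:

```r
library(lightgbm)

# lightgbm() wants a numeric matrix of predictors plus a label vector
x <- as.matrix(training[setdiff(names(training), "agb_mgha")])

lgb_fit <- lightgbm(
  data = x,
  label = training$agb_mgha,
  nrounds = 100,
  params = list(
    objective = "regression",  # assumption: standard L2 regression objective
    learning_rate = 0.05,
    num_leaves = 5,
    max_depth = 2,
    extra_trees = TRUE,
    min_data_in_leaf = 10,
    bagging_fraction = 0.3,
    bagging_freq = 1,
    feature_fraction = 0.4,
    min_data_in_bin = 8,
    lambda_l1 = 5,
    lambda_l2 = 1,
    force_col_wise = TRUE
  )
)
```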
SVM:

```
$x
agb_mgha ~ .

$kernel
[1] "laplacedot"

$type
[1] "eps-svr"

$kpar
$kpar$sigma
[1] 0.0078125

$C
[1] 12

$epsilon
[1] 1.525879e-05
```
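And the corresponding `kernlab::ksvm()` call, as a sketch; again, the `training` data frame name is an assumption:

```r
library(kernlab)

svm_fit <- ksvm(
  agb_mgha ~ .,
  data = training,                 # assumption: the post's training data frame
  type = "eps-svr",                # epsilon-regression SVM
  kernel = "laplacedot",           # Laplacian kernel
  kpar = list(sigma = 0.0078125),
  C = 12,
  epsilon = 1.525879e-05
)
```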