Applying the 2022-01-16 shrubland models to all LiDAR coverages.
This report is a companion to Shrubland 1.0.1: Supersize Me. It describes the same models as that document.
The logistic ensemble model was used to predict shrubland in all areas with LiDAR-derived CHMs. Predictions were restricted to pixels in vegetated LCPRI classes (excluding developed, barren, ice/snow, and water). The predicted probabilities were then classified at the thresholds documented in the model report, chosen to target specific values of specificity in order to improve positive predictive value.
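As a rough sketch of that workflow (not the production code), the snippet below masks a predicted-probability raster to vegetated LCPRI classes and applies a single threshold using the terra package. The file paths, class codes, and output name are placeholders rather than the values actually used.

library(terra)

# Hypothetical inputs: the ensemble probability surface and a co-registered
# LCPRI raster. Paths and class codes are illustrative only.
prob  <- rast("shrub_probability.tif")
lcpri <- rast("lcpri.tif")

# Keep only vegetated LCPRI classes; drop developed, barren, ice/snow, and water.
vegetated_classes <- c(3, 4, 5, 6)
veg_mask <- lcpri %in% vegetated_classes
prob_veg <- mask(prob, veg_mask, maskvalues = FALSE)

# Binary shrubland map at one of the thresholds reported below (1 = shrub).
threshold <- 0.332
shrub_binary <- prob_veg >= threshold
writeRaster(shrub_binary, "shrub_binary_0.332.tif", overwrite = TRUE)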
These results pool training, validation, and test pixels with the remainder of the state. As only 0.02% of pixels were used in any of those roles, these results are, for all practical purposes, equal to out-of-bag accuracy.
Threshold: 0.141
Confusion Matrix and Statistics
          Reference
Prediction        1        0
         1  1798249 17924494
         0   309228 61845686
Accuracy : 0.7773
95% CI : (0.7772, 0.7774)
No Information Rate : 0.9743
P-Value [Acc > NIR] : 1
Kappa : 0.124
Mcnemar's Test P-Value : <2e-16
Sensitivity : 0.85327
Specificity : 0.77530
Pos Pred Value : 0.09118
Neg Pred Value : 0.99502
Prevalence : 0.02574
Detection Rate : 0.02196
Detection Prevalence : 0.24088
Balanced Accuracy : 0.81428
'Positive' Class : 1
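The statistics blocks in this report are in the format printed by caret's confusionMatrix(). As a minimal sketch of how one is produced, assuming a vector of predicted probabilities and a vector of reference labels (both names here are hypothetical), the call looks roughly like this:

library(caret)

# `prob` holds predicted probabilities and `reference` holds the observed
# class (1 = shrub, 0 = not shrub) for each pixel; both are placeholders.
threshold <- 0.141
predicted <- factor(as.integer(prob >= threshold), levels = c(1, 0))
observed  <- factor(reference,                     levels = c(1, 0))

# Rows of the printed table are predictions, columns are the reference class.
confusionMatrix(predicted, observed, positive = "1")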
Threshold: 0.332
Confusion Matrix and Statistics
          Reference
Prediction        1        0
         1  1408450  7934939
         0   699027 71835241
Accuracy : 0.8946
95% CI : (0.8945, 0.8946)
No Information Rate : 0.9743
P-Value [Acc > NIR] : 1
Kappa : 0.2129
Mcnemar's Test P-Value : <2e-16
Sensitivity : 0.66831
Specificity : 0.90053
Pos Pred Value : 0.15074
Neg Pred Value : 0.99036
Prevalence : 0.02574
Detection Rate : 0.01720
Detection Prevalence : 0.11411
Balanced Accuracy : 0.78442
'Positive' Class : 1
Threshold: 0.523
Confusion Matrix and Statistics
          Reference
Prediction        1        0
         1  1103682  4229838
         0  1003795 75540342
Accuracy : 0.9361
95% CI : (0.936, 0.9361)
No Information Rate : 0.9743
P-Value [Acc > NIR] : 1
Kappa : 0.2697
Mcnemar's Test P-Value : <2e-16
Sensitivity : 0.52370
Specificity : 0.94697
Pos Pred Value : 0.20693
Neg Pred Value : 0.98689
Prevalence : 0.02574
Detection Rate : 0.01348
Detection Prevalence : 0.06514
Balanced Accuracy : 0.73534
'Positive' Class : 1
Threshold: 0.677
Confusion Matrix and Statistics
          Reference
Prediction        1        0
         1   819812  2191544
         0  1287665 77578636
Accuracy : 0.9575
95% CI : (0.9575, 0.9576)
No Information Rate : 0.9743
P-Value [Acc > NIR] : 1
Kappa : 0.2991
Mcnemar's Test P-Value : <2e-16
Sensitivity : 0.38900
Specificity : 0.97253
Pos Pred Value : 0.27224
Neg Pred Value : 0.98367
Prevalence : 0.02574
Detection Rate : 0.01001
Detection Prevalence : 0.03678
Balanced Accuracy : 0.68076
'Positive' Class : 1
Threshold: 0.818
Confusion Matrix and Statistics
          Reference
Prediction        1        0
         1   516250   881451
         0  1591227 78888729
Accuracy : 0.9698
95% CI : (0.9698, 0.9698)
No Information Rate : 0.9743
P-Value [Acc > NIR] : 1
Kappa : 0.2798
Mcnemar's Test P-Value : <2e-16
Sensitivity : 0.244961
Specificity : 0.988950
Pos Pred Value : 0.369357
Neg Pred Value : 0.980228
Prevalence : 0.025739
Detection Rate : 0.006305
Detection Prevalence : 0.017071
Balanced Accuracy : 0.616956
'Positive' Class : 1
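To make the link between the 2x2 counts and the summary statistics explicit, the snippet below recomputes the headline values for the 0.818 threshold directly from the table above; it reproduces the sensitivity, specificity, and predictive values reported there.

# Counts from the 0.818 threshold (rows = prediction, columns = reference).
tp <- 516250     # predicted 1, reference 1
fp <- 881451     # predicted 1, reference 0
fn <- 1591227    # predicted 0, reference 1
tn <- 78888729   # predicted 0, reference 0

sensitivity <- tp / (tp + fn)                   # 0.244961
specificity <- tn / (tn + fp)                   # 0.988950
ppv         <- tp / (tp + fp)                   # 0.369357
npv         <- tn / (tn + fn)                   # 0.980228
prevalence  <- (tp + fn) / (tp + fp + fn + tn)  # 0.025739
balanced    <- (sensitivity + specificity) / 2  # 0.616956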
Raster files can be downloaded from GitHub at this link.
Files ending in classified have been classified into four categories; files not ending in classified represent the binary shrubland maps (1 = shrub) at the thresholds specified above.
The probability map is too large to share easily; I’ll be adding it to Labrador in the near future.
If you see mistakes or want to suggest changes, please create an issue on the source repository.
For attribution, please cite this work as
Mahoney (2022, Jan. 19). CAFRI Labs: Shrubland 1.0.1: Statewide Prediction Accuracy. Retrieved from https://cafri-labs.github.io/acceptable-growing-stock/posts/shrubland-101-statewide-prediction-accuracy/
BibTeX citation
@misc{mahoney2022shrubland,
  author = {Mahoney, Mike},
  title = {CAFRI Labs: Shrubland 1.0.1: Statewide Prediction Accuracy},
  url = {https://cafri-labs.github.io/acceptable-growing-stock/posts/shrubland-101-statewide-prediction-accuracy/},
  year = {2022}
}