Applying the 2022-01-15 shrubland models to all LiDAR coverages.
This report is a companion to Shrubland 1.0: The Gang’s All Here and describes the same models as that document.
The logistic ensemble model was used to predict shrubland across all areas with LiDAR-derived CHMs. Predictions were restricted to pixels in vegetated LCPRI classes (excluding developed, barren, ice/snow, and water). The predicted probabilities were then classified using the five thresholds below, documented in the model report, chosen to target certain values of specificity in order to improve positive predictive value.
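For concreteness, the masking and thresholding step can be sketched with terra; the file names here are hypothetical, and the LCPRI class codes (1 = developed, 5 = water, 7 = ice/snow, 8 = barren) are assumed rather than taken from the production scripts.

library(terra)

# Ensemble-predicted shrub probability and LCPRI land cover on the same grid
# (hypothetical file names).
prob  <- rast("shrub_probability.tif")
lcpri <- rast("lcpri.tif")

# Keep only vegetated classes: drop developed, water, ice/snow, and barren.
vegetated <- !(lcpri %in% c(1, 5, 7, 8))
prob      <- mask(prob, vegetated, maskvalues = FALSE)

# Classify the probabilities at one of the reported thresholds.
shrub_0493 <- classify(prob, rbind(c(-Inf, 0.493, 0), c(0.493, Inf, 1)))
writeRaster(shrub_0493, "shrub_binary_0493.tif", overwrite = TRUE)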
These results pool training, validation, and test pixels with the remainder of the state. As only 0.02% of pixels were used in any of these roles, these results are for all practical purposes equal to out-of-bag accuracy.
Threshold: 0.493
Confusion Matrix and Statistics
          Reference
Prediction        1        0
         1  1727907 17395571
         0   379570 62374609
Accuracy : 0.7829
95% CI : (0.7828, 0.783)
No Information Rate : 0.9743
P-Value [Acc > NIR] : 1
Kappa : 0.1221
Mcnemar's Test P-Value : <2e-16
Sensitivity : 0.81989
Specificity : 0.78193
Pos Pred Value : 0.09036
Neg Pred Value : 0.99395
Prevalence : 0.02574
Detection Rate : 0.02110
Detection Prevalence : 0.23356
Balanced Accuracy : 0.80091
'Positive' Class : 1
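Each block above is standard caret::confusionMatrix() output. A minimal sketch of how one block can be reproduced, assuming hypothetical vectors pred_prob (ensemble probabilities) and obs (reference labels) extracted for the same pixels:

library(caret)

threshold <- 0.493
predicted <- factor(as.integer(pred_prob >= threshold), levels = c("1", "0"))
reference <- factor(obs, levels = c("1", "0"))

# Rows are predictions, columns are the reference, and "1" (shrub) is the
# positive class, matching the tables in this report.
confusionMatrix(data = predicted, reference = reference, positive = "1")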
Threshold: 0.759
Confusion Matrix and Statistics
          Reference
Prediction        1        0
         1  1290887  7733699
         0   816590 72036481
Accuracy : 0.8956
95% CI : (0.8955, 0.8956)
No Information Rate : 0.9743
P-Value [Acc > NIR] : 1
Kappa : 0.1985
Mcnemar's Test P-Value : <2e-16
Sensitivity : 0.61253
Specificity : 0.90305
Pos Pred Value : 0.14304
Neg Pred Value : 0.98879
Prevalence : 0.02574
Detection Rate : 0.01577
Detection Prevalence : 0.11022
Balanced Accuracy : 0.75779
'Positive' Class : 1
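As a quick check on how the summary statistics follow from the counts, the 0.759 block works out as:

tp <- 1290887   # predicted 1, reference 1
fp <- 7733699   # predicted 1, reference 0
fn <- 816590    # predicted 0, reference 1
tn <- 72036481  # predicted 0, reference 0

tp / (tp + fn)  # sensitivity, ~0.6125
tn / (tn + fp)  # specificity, ~0.9031
tp / (tp + fp)  # positive predictive value, ~0.1430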
Threshold: 0.854
Confusion Matrix and Statistics
          Reference
Prediction        1        0
         1   850761  3264302
         0  1256716 76505878
Accuracy : 0.9448
95% CI : (0.9447, 0.9448)
No Information Rate : 0.9743
P-Value [Acc > NIR] : 1
Kappa : 0.2478
Mcnemar's Test P-Value : <2e-16
Sensitivity : 0.40369
Specificity : 0.95908
Pos Pred Value : 0.20674
Neg Pred Value : 0.98384
Prevalence : 0.02574
Detection Rate : 0.01039
Detection Prevalence : 0.05026
Balanced Accuracy : 0.68138
'Positive' Class : 1
Threshold: 0.891
Confusion Matrix and Statistics
          Reference
Prediction        1        0
         1   490841  1341543
         0  1616636 78428637
Accuracy : 0.9639
95% CI : (0.9638, 0.9639)
No Information Rate : 0.9743
P-Value [Acc > NIR] : 1
Kappa : 0.2307
Mcnemar's Test P-Value : <2e-16
Sensitivity : 0.232905
Specificity : 0.983182
Pos Pred Value : 0.267870
Neg Pred Value : 0.979803
Prevalence : 0.025739
Detection Rate : 0.005995
Detection Prevalence : 0.022380
Balanced Accuracy : 0.608043
'Positive' Class : 1
Threshold: 0.910
Confusion Matrix and Statistics
          Reference
Prediction        1        0
         1   223049   492771
         0  1884428 79277409
Accuracy : 0.971
95% CI : (0.9709, 0.971)
No Information Rate : 0.9743
P-Value [Acc > NIR] : 1
Kappa : 0.1469
Mcnemar's Test P-Value : <2e-16
Sensitivity : 0.105837
Specificity : 0.993823
Pos Pred Value : 0.311599
Neg Pred Value : 0.976782
Prevalence : 0.025739
Detection Rate : 0.002724
Detection Prevalence : 0.008743
Balanced Accuracy : 0.549830
'Positive' Class : 1
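Pulling the headline numbers out of the five blocks above (rounded to three digits) makes the trade-off explicit: higher thresholds give up sensitivity in exchange for specificity and positive predictive value.

data.frame(
  threshold   = c(0.493, 0.759, 0.854, 0.891, 0.910),
  sensitivity = c(0.820, 0.613, 0.404, 0.233, 0.106),
  specificity = c(0.782, 0.903, 0.959, 0.983, 0.994),
  ppv         = c(0.090, 0.143, 0.207, 0.268, 0.312)
)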
Raster files can be downloaded from the project's GitHub repository.
Files with names ending in "classified" have been classified into four categories; files without that suffix are the binary shrubland maps (1 = shrub) produced using the threshold specified above.
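Once downloaded, the binary maps can be inspected with terra; the file name below is hypothetical.

library(terra)

shrub <- rast("shrub_binary_0493.tif")
freq(shrub)  # pixel counts for non-shrub (0) and shrub (1)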
The probability map is too large to share easily; I’ll be adding it to Labrador in the near future.
If you see mistakes or want to suggest changes, please create an issue on the source repository.
For attribution, please cite this work as
Mahoney (2022, Jan. 15). CAFRI Labs: Shrubland 1.0: Statewide Prediction Accuracy. Retrieved from https://cafri-labs.github.io/acceptable-growing-stock/posts/shrubland-10-statewide-prediction-accuracy/
BibTeX citation
@misc{mahoney2022shrubland,
  author = {Mahoney, Mike},
  title = {CAFRI Labs: Shrubland 1.0: Statewide Prediction Accuracy},
  url = {https://cafri-labs.github.io/acceptable-growing-stock/posts/shrubland-10-statewide-prediction-accuracy/},
  year = {2022}
}