Canopy Delineation

Summary

CAFRI’s core contracts with DEC are built around mapping and monitoring forests; however, our lab is more broadly interested in all woody vegetation regardless of its cover/use classification. For example, we would also like to map urban trees (parks, street trees, and beyond), marginal or transitional vegetation (shrublands, for example), and really anything that has trees but doesn’t traditionally qualify as a ‘forest’. So we might as well just map everything, right? Well, it’s not that easy. We don’t want to be in the business of mapping/predicting in areas that are either a) not represented in our model training data or b) areas where our models can’t make accurate predictions. A prime example of this problem is our LiDAR-AGB mapping. Those models are built on LiDAR height metrics, regardless of the kind of structure producing them. So to our LiDAR-based models, buildings appear to contain AGB just as forests do, and we want to be able to make those distinctions.

Our current modus operandi is to use annual land-cover predictions from LCMAP to mask many of our maps to vegetated classes: we exclude pixels classified as water, developed, or barren. But we know we are missing some key areas, like the urban forests mentioned above, and we are likely excluding lots of mixels that contain some portion of tree cover. A common example would be trees that border bodies of water and fall within pixels classified as water. But often shoreline trees are some of the biggest and baddest out there!
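As an illustration of this masking step, here is a minimal NumPy sketch. The LCMAP class codes used below are assumptions for illustration (check the product guide for the authoritative codes), and in practice the co-registered rasters would be read and written with a library like rasterio:

```python
import numpy as np

# LCMAP primary land-cover class codes, assumed here for illustration:
# 1 = Developed, 3 = Grass/Shrub, 4 = Tree Cover, 5 = Water, 8 = Barren
EXCLUDE_CLASSES = [1, 5, 8]

def mask_to_vegetated(values, lcmap_classes, nodata=-9999.0):
    """Return a copy of `values` with pixels set to nodata wherever the
    co-registered LCMAP class raster is developed, water, or barren.

    `values` and `lcmap_classes` are 2D arrays covering the same grid,
    e.g. as read from co-registered rasters."""
    out = values.astype("float32").copy()
    out[np.isin(lcmap_classes, EXCLUDE_CLASSES)] = nodata
    return out

# Tiny worked example: a 2x2 AGB tile over [tree, water; developed, grass/shrub]
agb = np.array([[120.0, 80.0], [40.0, 15.0]])
classes = np.array([[4, 5], [1, 3]])
masked = mask_to_vegetated(agb, classes)
```

In the toy example, the water and developed pixels become nodata while the tree and grass/shrub pixels pass through unchanged.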

So the idea is that we want a more precise layer, a set of polygons delineating tree canopy for the entire state (or some large portion of it), that we can use to constrain all of our mapping and modeling. There are many existing approaches out there, and they mostly rely on high-resolution aerial imagery (which we have statewide courtesy of NAIP) and LiDAR data (which we have for most of the state). We are open to any approach that can give us statewide coverage at fine resolutions (\(\leq\) 30 m) with regular updates (annual, bi-annual, etc.).
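To show how such a polygon layer would constrain a raster product, here is a toy pure-Python/NumPy rasterizer. A production workflow would use something like rasterio.features.rasterize or GDAL instead; the polygon, grid size, and transform below are made up for illustration:

```python
import numpy as np

def point_in_polygon(x, y, poly):
    """Even-odd ray-casting test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Only edges that straddle the horizontal ray at height y can cross it
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def canopy_mask(polygons, width, height, transform):
    """Burn canopy polygons into a binary grid by testing each pixel center.
    `transform` is (x0, dx, y0, dy): pixel (col, row) has its center at
    (x0 + (col + 0.5) * dx, y0 + (row + 0.5) * dy)."""
    x0, dx, y0, dy = transform
    mask = np.zeros((height, width), dtype=bool)
    for row in range(height):
        for col in range(width):
            x = x0 + (col + 0.5) * dx
            y = y0 + (row + 0.5) * dy
            mask[row, col] = any(point_in_polygon(x, y, p) for p in polygons)
    return mask

# One square "crown" polygon covering the left half of a 4x4 grid at 1 m resolution
poly = [(0, 0), (2, 0), (2, 4), (0, 4)]
mask = canopy_mask([poly], width=4, height=4, transform=(0.0, 1.0, 0.0, 1.0))
```

Any AGB (or other) raster on the same grid could then be masked with `raster[~mask] = nodata`, restricting predictions to delineated canopy.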

detectreeRGB

To date we have done some proof-of-concept work with a group in Cambridge that has developed an open-source classifier called detectreeRGB (see links below). We have provided them with NAIP imagery and a set of manually delineated tree-crown polygons to use as training data. We have found this approach to be quite successful in urban/suburban areas, though it seems to struggle in closed-canopy conditions. That may be OK, since closed-canopy forest is less challenging for us to identify with existing land-cover products.

Training Data Generation

Training data is generated via heads-up digitization (manual delineation) of tree crowns. We have used NAIP imagery, along with LiDAR-derived canopy height models, as reference information from which practitioners can derive crown boundaries. The Cambridge group recommended at least 30 square images (as TIFFs), each approximately 2-4 hectares in size, with additional buffers such that all tree canopies within the tile are completely delineated (none are cut off). Alternatively, providing a contiguous surface equal to the area of the 30 tiles is acceptable; the Cambridge group has a tiling script that can be leveraged to convert larger contiguous rasters into smaller tiles.
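The Cambridge group’s tiling script is their own; purely as a sketch of the idea, here is a hypothetical buffered-tiling routine. The tile and buffer sizes are illustrative assumptions (at 1 m NAIP resolution, a 160 px core tile is roughly 2.6 ha):

```python
import numpy as np

def tile_array(arr, tile_size, buffer):
    """Split a large raster array (H, W) into overlapping square tiles.
    Each tile is tile_size px on a side plus `buffer` px on every edge so
    crowns at tile borders are not cut off; edge tiles are clipped to the
    array bounds."""
    tiles = []
    h, w = arr.shape[:2]
    for r0 in range(0, h, tile_size):
        for c0 in range(0, w, tile_size):
            r_start = max(r0 - buffer, 0)
            c_start = max(c0 - buffer, 0)
            r_stop = min(r0 + tile_size + buffer, h)
            c_stop = min(c0 + tile_size + buffer, w)
            tiles.append(arr[r_start:r_stop, c_start:c_stop])
    return tiles

# A 400x400 px raster becomes 9 tiles: 160 px cores plus a 20 px buffer
tiles = tile_array(np.zeros((400, 400)), tile_size=160, buffer=20)
```

Interior tiles here come out at 200 px square (160 px core plus the buffer on all four sides), while corner and edge tiles are clipped where the buffer would run off the raster.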

Some tips for creating canopy polygons:

  • You can stretch the imagery bands (RGB) in QGIS (or other software) to add contrast. Anecdotally, setting the max to 180-190 on the R and B bands seems to help.
  • Overlaying a LiDAR-derived canopy height model can provide another source of information when the imagery itself does not suffice.
  • Add a confidence value (1-10) to each polygon, which might be helpful later on for post-hoc analysis.
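The band-stretch tip can also be scripted outside of QGIS. This is a rough NumPy sketch of the same linear stretch; the 185 cutoff and the R, G, B band ordering are assumptions carried over from the anecdote above:

```python
import numpy as np

def stretch_band(band, new_max=185):
    """Linearly stretch an 8-bit band so values at `new_max` map to 255,
    clipping anything brighter. Mirrors setting the band max to ~180-190
    in QGIS's min/max stretch to boost contrast for crown digitization."""
    scaled = band.astype("float32") / float(new_max) * 255.0
    return np.clip(np.round(scaled), 0, 255).astype("uint8")

# Synthetic 3-band (R, G, B) chip standing in for a NAIP tile
rgb = np.random.default_rng(0).integers(0, 256, (3, 64, 64), dtype=np.uint8)
rgb_stretched = rgb.copy()
rgb_stretched[0] = stretch_band(rgb[0])  # stretch R
rgb_stretched[2] = stretch_band(rgb[2])  # stretch B; G left untouched
```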

Training data developed by Sam Gordon and Lucas Johnson.

Running the Model

To date we have outsourced the training/prediction to the Cambridge group, but their code and tools are all open source. They are willing to field questions about getting their tools running locally so that we can iterate and progress on our own.

Early Results

Here is an example of detectreeRGB’s tree crown predictions in suburban Buffalo, NY. Not perfect, but promising!

One problem we identified was the model’s tendency to label water towers and other circular buildings as tree canopy. The Cambridge group recommended simply providing the model with some training images that include water towers (unlabeled, of course). More generally, they recommended including some training data without trees (e.g., tiles containing open fields or completely urban settings), since we only provided treed tiles.

Some trial runs, executed for us, by the Cambridge team.

People

Some papers

  • O’Neil-Dunne, Jarlath PM, et al. “An object-based system for LiDAR data fusion and feature extraction.” Geocarto International 28.3 (2013): 227-242. doi: 10.1080/10106049.2012.689015
  • O’Neil-Dunne, Jarlath, Sean MacFaden, and Anna Royar. “A versatile, production-oriented approach to high-resolution tree-canopy mapping in urban and suburban landscapes using GEOBIA and data fusion.” Remote sensing 6.12 (2014): 12837-12865. doi: 10.3390/rs61212837
  • Yang, Lin, et al. “Tree detection from aerial imagery.” Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. 2009. doi: 10.1145/1653771.1653792
  • Bosch, Martí. “DetecTree: Tree detection from aerial imagery in Python.” Journal of Open Source Software 5.50 (2020): 2172. doi: 10.21105/joss.02172
  • Brown, Jesslyn F., et al. “Lessons learned implementing an operational continuous United States national land change monitoring capability: The Land Change Monitoring, Assessment, and Projection (LCMAP) approach.” Remote Sensing of Environment 238 (2020): 111356. doi: 10.1016/j.rse.2019.111356