Vegetation has a tremendous influence on snow processes
and snowpack dynamics, yet remote sensing techniques to resolve the spatial
variability of sub-canopy snow depth are not always available and are
difficult from space-based platforms. Unmanned aerial vehicles (UAVs) have
recently seen widespread application in capturing high-resolution information
on snow processes and are herein applied to the sub-canopy snow depth
challenge. Previous demonstrations of snow depth mapping with UAV structure
from motion (SfM) and airborne lidar have focussed on non-vegetated surfaces
or reported large errors in the presence of vegetation. In contrast,
UAV-lidar systems produce high-density point clouds and measure returns from a
wide range of scan angles, increasing the likelihood of successfully sensing
the sub-canopy snow depth. The effectiveness of UAV lidar and UAV SfM in
mapping snow depth in both open and forested terrain was tested in a 2019
field campaign at the Canadian Rockies Hydrological Observatory, Alberta, and
at Canadian prairie sites near Saskatoon, Saskatchewan, Canada. Only
UAV lidar could successfully measure the sub-canopy snow surface with
reliable sub-canopy point coverage and consistent error metrics
(root mean square error (RMSE)
Snow accumulation and melt are critical parts of the hydrological cycle in
cold regions (King et al., 2008). To understand these processes, there need
to be robust and accurate observation methodologies to measure the depth and
density of a snowpack and its change across all aspects of the landscape.
Unfortunately, satellite remote sensing methods struggle to quantify the
spatial distribution of snow at a high enough resolution and accuracy to
account for the fine-scale interactions between snow and vegetation (Nolin,
2010). Remote sensing conceptually promises the capability to gather this
type of data at the spatial scales and extents needed, but the main
challenge for snow observations across a heterogeneous landscape is that
exposed vegetation and forests obscure the underlying snow surface (Bhardwaj
et al., 2016; Nolin, 2010; Tinkham et al., 2014). This paper seeks to
illuminate some of the challenges posed to unmanned aerial vehicle (UAV)-based remote sensing of snow
depth and to show how UAV lidar represents a promising
opportunity to overcome this limitation at the small catchment scale
(
Capturing the spatial distribution of snowpacks and snow cover at a particular instance provides information about the integrated accumulation and ablation processes up to that point in time. Accurate quantification of snow accumulation and ablation is needed to improve the understanding of snow hydrology, test processes, examine the spatial scaling of process interactions (Clark et al., 2011; Deems et al., 2006; Trujillo et al., 2007), and to initialise and/or validate model predictions (Hedrick et al., 2018). Snow depth, the focus of this paper, is not the variable of ultimate interest for hydrology. Rather, snow water equivalent (SWE) is used for snow hydrology applications (Pomeroy and Gray, 1995). Fully cognisant of this, the focus here is on snow depth, as it is well documented that snow depth varies much more than density (Pomeroy and Gray, 1995; Shook and Gray, 1996; Jonas et al., 2009; López-Moreno et al., 2013); therefore, improving the accuracy of snow depth observations in a drainage basin is critical to improving the estimation of SWE at and within basin scales.
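Since the argument above rests on the relationship between depth, density, and SWE, a minimal sketch of the conversion may be useful; the density value below is illustrative only, not a measured figure.

```python
# Minimal sketch of the depth-to-SWE conversion.
# SWE in mm of water = depth (m) x bulk density (kg m^-3), because a
# 1 kg m^-2 column of water corresponds to 1 mm of water depth.

def swe_mm(depth_m: float, density_kg_m3: float) -> float:
    """Snow water equivalent (mm of water) from depth and bulk density."""
    return depth_m * density_kg_m3

# Example: 0.8 m of snow at 250 kg m^-3 holds 200 mm of water
print(swe_mm(0.8, 250.0))
```

Because depth varies far more than density across a landscape, improving the depth observation improves the SWE estimate nearly proportionally.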
Snow depth and SWE observations are traditionally collected through in situ observations (Goodison et al., 1987; Helms et al., 2008; Kinar and Pomeroy, 2015a; Sturm, 2015). In situ approaches, such as snow surveying, rely on manual sampling of snow depths and densities to estimate SWE. When conducted along landscape-stratified transects, the landscape-scale SWE can be estimated (Pomeroy and Gray, 1995; Steppuhn and Dyck, 1974). The challenge for snow survey observations is that they are prone to observer bias, are labour-intensive and time-consuming, and are often unable to sample all aspects of a landscape such as avalanche zones (Kinar and Pomeroy, 2015a). Nonetheless, snow surveying is a proven approach to quantify SWE and has been operationalised across many regions. The practice has a long historical precedent and has created many long-term records, which are a valuable data source (Goodison et al., 1987; Helms et al., 2008). Other point observations, such as snow pillows (Coles et al., 1985), acoustic sensors (Kinar and Pomeroy, 2009, 2015b), and passive gamma sensors (Smith et al., 2017), are valuable automated data sources but are spatially limited in extent and can often suffer from location/elevation bias – as demonstrated by the SNOTEL network in the western United States (Molotch and Bales, 2006). In particular, forest clearings accumulate relatively more snow than the adjacent canopy-covered ground (Pomeroy and Gray, 1995), and so clearing measurements may not be suitable for snow hydrology calculations or model validation in forested regions even though they are often used for just such purposes. Other techniques need to be developed to capture the small-scale spatial variability of snow–vegetation interactions to advance our process understandings and validate the next generation of distributed snow models.
Remote sensing approaches have shown promise in evaluating snow depth in open areas. Airborne-lidar and UAV structure-from-motion (SfM) approaches have been proven to provide snow depth mapping abilities when differencing snow-covered (hereafter snow) and snow-free (hereafter ground) digital elevation models (DEMs). Lidar, an active sensor, emits a pulse of light, and detection of the reflected pulse produces a point cloud of a scene with consistent quality regardless of flight characteristics, wind conditions, or solar illumination. A clear benefit of lidar is that multiple returns per pulse can be observed with returns possible from within the canopy and from the sub-canopy ground or snow surface. In contrast, UAV SfM uses a passive RGB sensor where data quality is not actively controlled. This results in variable image quality because inconsistent solar illumination influences image exposure, wind gusts influence platform stability, leading to blurry images and inconsistent overlap, and surface heterogeneity means that some areas of the domain will have more key points – points automatically detected and matched in multiple images (Westoby et al., 2012) – leading to variability in the quality of the SfM solution (Bühler et al., 2016; Harder et al., 2016; Meyer and Skiles, 2019). So while SfM can provide error metrics of similar quality in open areas, the quality will vary between flights as conditions change, whereas lidar will be more consistent. Reported snow depth accuracy in open environments, expressed as root mean square errors (RMSEs), varies from 0.08 to 0.60 m for airborne lidar (Currier et al., 2019; DeBeer and Pomeroy, 2010; Harpold et al., 2014; Mazzotti et al., 2019; Painter et al., 2016; Tinkham et al., 2014), 0.17 to 0.30 m for airborne SfM (Bühler et al., 2015; Meyer and Skiles, 2019; Nolan et al., 2015), and 0.02 to 0.30 m for UAV SfM (Harder et al., 2016; Vander Jagt et al., 2015; De Michele et al., 2016).
A notable challenge is that the presence of exposed vegetation, especially dense forest, confounds SfM solutions and obscures airborne-lidar bare-surface extractions which are needed for fine-scale differencing of DEMs to evaluate snow depths or snow depth changes (Bhardwaj et al., 2016; Deems et al., 2013; Harpold et al., 2014). Terrestrial laser scanning (TLS) is another approach for observing high-resolution snow depth data which has been used to develop an understanding of snow depth distributions and for validating other snow depth observation methods (Currier et al., 2019; Egli et al., 2012; Grünewald et al., 2010; Mott et al., 2011). However, TLS has important limitations that restrict further landscape-scale understanding of snow processes in forested areas as it is limited by the site-specific viewshed and viewing geometry (Deems et al., 2013) and occlusion by forest canopies and low vegetation, which decreases point cloud density away from forest edges (Currier et al., 2019). TLS remains an excellent technique for detailed examination of the forest-edge snow environment.
Most applications of remote sensing for observing snow processes have focussed on open environments. However, vegetated portions of those same environments can play a large role in landscape-scale snow hydrology. For example, wetland vegetation accumulates deep snowdrifts and so has an exaggerated influence on snow accumulation processes in prairie environments (Fang and Pomeroy, 2009). Similarly, forests constitute large fractions of the mountain domain (Callaghan et al., 2011; Troendle, 1983) and have very different snow processes than those found in open environments (Pomeroy et al., 2002). Snow–vegetation interactions are complex (Currier and Lundquist, 2018; Gelfan et al., 2004; Hedstrom and Pomeroy, 1998; Harder et al., 2018; Mazzotti et al., 2019; Musselman et al., 2008; Parviainen and Pomeroy, 2000; Pomeroy et al., 2001; Zheng et al., 2016) and involve both snow interception by the canopy and wind redistribution to forest edges. In dense forests, vegetation leads to the interception and subsequent sublimation of snow, resulting in an overall decrease in accumulation (Hedstrom and Pomeroy, 1998; Parviainen and Pomeroy, 2000; Reba et al., 2012; Swanson et al., 1986). In open environments, such as prairie, tundra, and alpine, wind redistribution of snow leads to a decrease in snow depth in exposed erodible areas and an increase in snow accumulation over aerodynamically rough surfaces or in sheltered areas where wind speeds decrease and snow is deposited – this includes forest edges (Busseau et al., 2017; Essery et al., 1999; Fang and Pomeroy, 2009; Liston and Hiemstra, 2011; Pomeroy et al., 1993; Schmidt, 1982). Much of the understanding of snow–vegetation interactions is based on snow surveys, which are limited in scale and extent. Thus, approaches to systematically and efficiently quantify these dynamics across a drainage basin accounting for topographic and vegetative heterogeneity are needed to further develop and test our process understandings.
The overall motivation of this work is to understand how snow depth, as well as the processes driving its accumulation and ablation, varies across complex vegetated landscapes. Better tools are needed to measure snow at scales that resolve snow–vegetation interactions, which can involve individual trees and small forest gaps. The specific objectives of this paper are therefore (1) to evaluate the ability of UAV-lidar versus UAV-SfM techniques for measuring snow depth in open and vegetated areas and (2) to articulate challenges and opportunities for UAVs to map sub-canopy snow depth.
Several sites from western Canada, which represent a range of surface conditions and snow climates, were selected to test the ability of UAV lidar and UAV SfM to measure snow depth in open and vegetated areas.
Fortress Mountain Snow Laboratory (hereafter Fortress), in Kananaskis, Alberta
(50.833
Two study areas in the Canadian Prairies were also examined. Both
sites provide examples of cropland with hummocky terrain subject to
significant blowing snow redistribution (Fig. 1b, c). Windblown snow from
upland areas of short vegetation, wheat and barley stubble, is often
transported to lower-elevation wetland depressions where it is effectively
trapped by wetland vegetation; shrub vegetation types include willows,
dogwoods, tall grasses, and reeds while the trees are primarily poplar and
willow. One site was located southeast of Saskatoon, Saskatchewan (51.941
The UAV-lidar system consisted of a RIEGL miniVUX-1UAV lidar sensor,
integrated with an Applanix APX-20 inertial measurement unit (IMU) and
mounted on a DJI M600 Pro UAV platform (Fig. 2a). The miniVUX-1UAV
utilises a rotating mirror to provide a 360-degree line scan with a
measurement rate of 100 kHz and up to five returns per shot with a 15 mm
precision. The APX-20 provides positional accuracy of
Coincident surface mapping with SfM used imagery collected by eBee X or
eBee Plus fixed-wing UAV platforms with S.O.D.A. RGB cameras from senseFly (Fig. 2b). The longer flight times, up to 70 min, associated with a
lightweight payload on a fixed-wing platform allowed for the efficient mapping
of large areas. Overlap parameters were generally 80 % for the
longitudinal and 65 % for the lateral axes. Flight altitudes of 120 m above
the surface provided a ground sample distance of 2.8 cm with the S.O.D.A.
camera, which was used on both eBee X and eBee Plus platforms. The generated
UAV-SfM point clouds have densities of
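The altitude-to-resolution relationship quoted above follows the standard ground-sample-distance formula for a nadir camera. The focal length and pixel pitch in the sketch below are placeholders, not confirmed S.O.D.A. specifications.

```python
# Illustrative ground-sample-distance (GSD) calculation:
#   GSD = flight altitude x pixel pitch / focal length
# Units: altitude in m, focal length in mm, pixel pitch in micrometres,
# result in centimetres per pixel.

def gsd_cm(altitude_m: float, focal_mm: float, pixel_um: float) -> float:
    """Ground sample distance in centimetres for a nadir-pointing camera."""
    return altitude_m * pixel_um / focal_mm * 0.1

# A 10 mm lens with 2.0 um pixels flown at 100 m gives a 2.0 cm GSD
print(gsd_cm(100.0, 10.0, 2.0))
```

Halving the flight altitude halves the GSD, at the cost of covering less area per flight.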
UAV-lidar platform: RIEGL miniVUX-1UAV mounted on DJI M600 Pro
The assessment of snow depth accuracy used coincident surveys of surface
elevation points with Global Navigation Satellite System (GNSS) surveys and
manual measurements of snow depths with a ruler. The intention of the surveys
was to validate the spatially distributed snow depth retrievals, and
transects were selected so that the surveyor(s) could efficiently
sample the greatest variety of vegetation types and gradients. A Leica GS16
base/rover kit provided a real-time kinematic (RTK) survey solution to
survey points. The 3D uncertainty of the relative position between the base
and rover was computed in real time to be
To assess the accuracy of the UAV snow depth measurement methods, as well as to provide insight into the seasonally evolving snow depth distribution, a total of 19 flight/manual surveys were conducted at all three study sites between September 2018 and April 2019. These are summarised by date, surveyed surface condition, UAV data collected, and the corresponding number of manually surveyed surface elevation points in Table 1.
Summary of data collection campaign, September 2018 to April 2019.
Snow depth was quantified as the vertical difference between a bare-ground
DEM and a snow-surface DEM. This approach was taken regardless of
whether the DEMs came from lidar scanning or SfM processing. The workflows
implemented to produce DEMs varied between lidar and SfM approaches (Fig. 3),
and code is available at
Data processing workflows for lidar and SfM point cloud generation.
To generate a georeferenced lidar point cloud, several data streams need to
be integrated in post-processing. The raw high-frequency trajectory (
The UAV-SfM processing workflow begins with associating a high-accuracy
The points representing the “bare” surface, whether that is the snow or
ground surface, are of interest for snow mapping. Lidar point clouds
comprise returns from vegetation
A DEM was generated in order to reduce the overall volume of data and to allow for simple surface differencing. The “blast2dem” tool within the LAStools package generates a seamless triangulated irregular network (TIN) that conforms to the point cloud, which is then resampled to a raster (Isenburg, 2019). A spatial resolution of 0.1 m was applied to all DEMs generated.
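The TIN-and-resample step can be illustrated conceptually. This is not the LAStools blast2dem implementation: scipy's Delaunay-based linear interpolator stands in for the TIN, and the points lie on a synthetic planar surface so the result can be checked.

```python
# Conceptual sketch of TIN-to-raster DEM generation: triangulate classified
# bare-surface points, then sample the triangulated surface on a regular
# 0.1 m grid.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(500, 2))   # x, y of bare-surface points (m)
z = 0.05 * pts[:, 0] + 0.01 * pts[:, 1]       # synthetic planar elevations (m)

# Linear interpolation over the Delaunay triangulation acts as the TIN
tin = LinearNDInterpolator(pts, z)

res = 0.1                                     # DEM resolution (m)
gx, gy = np.meshgrid(np.arange(0.0, 10.0, res), np.arange(0.0, 10.0, res))
dem = tin(gx, gy)                             # NaN outside the convex hull
```

Cells outside the convex hull of the points come back as NaN, which is one way gaps in sub-canopy coverage surface in a final DEM.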
To assess the accuracy of UAV lidar and UAV SfM with respect to
observations, a DEM-based comparison was undertaken. Snow and ground surface
values were extracted from the DEM raster cells for locations where a point
was manually surveyed and snow depth measured. The snow depth was calculated
from the vertical difference between the snow DEM and ground DEM. The
influence of vegetation height on snow depth errors was also considered by
segmenting the error metrics with respect to vegetation height (open
Fortress
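The differencing and error-metric calculation described above can be sketched with small synthetic rasters (the values are illustrative, not survey data):

```python
# Minimal sketch: snow depth as the vertical difference of the snow and
# ground DEMs, with RMSE and bias computed against manually probed depths
# at the same raster cells.
import numpy as np

ground = np.array([[100.0, 100.2], [100.1, 100.3]])  # bare-ground DEM (m)
snow   = np.array([[100.6, 100.7], [100.5, 100.9]])  # snow-surface DEM (m)

depth = snow - ground                                # snow depth raster (m)

probed = np.array([[0.55, 0.50], [0.45, 0.55]])      # manual probe depths (m)
err = depth - probed
rmse = float(np.sqrt(np.mean(err ** 2)))
bias = float(np.mean(err))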
The continuity of bare-surface point density between UAV-lidar and UAV-SfM
methods was quantified in order to interpret how well the respective tools
can sense sub-canopy surfaces. All surveys with coincident UAV-lidar and
UAV-SfM flights were assessed with the LAStools (Isenburg, 2019)
grid_metrics function to classify an area with
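A grid-based point-density classification analogous to grid_metrics can be sketched in numpy; the coordinates and cell size here are synthetic, and the LAStools tool itself is not reproduced.

```python
# Hedged sketch of per-cell bare-surface point-density computation via a
# 2D histogram (analogous to, not identical to, LAStools grid_metrics).
import numpy as np

def density_grid(x, y, extent, cell):
    """Bare-surface point count per unit area on a regular grid.

    extent = (xmin, xmax, ymin, ymax) in metres; cell = cell size in metres.
    """
    xmin, xmax, ymin, ymax = extent
    xbins = np.arange(xmin, xmax + cell, cell)
    ybins = np.arange(ymin, ymax + cell, cell)
    counts, _, _ = np.histogram2d(x, y, bins=[xbins, ybins])
    return counts / cell ** 2        # points per square metre

# Three points fall in the first 1 m cell, one in the second
x = np.array([0.2, 0.4, 0.6, 1.5])
y = np.array([0.3, 0.7, 0.1, 0.5])
dens = density_grid(x, y, (0.0, 2.0, 0.0, 1.0), 1.0)
```

Thresholding the resulting density raster then separates well-sensed cells from those whose surface must be interpolated.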
An accuracy assessment comparing the snow depth from UAV-lidar and UAV-SfM
techniques to the manually sampled ground surveys is shown in Fig. 5. UAV lidar has a consistently lower error than UAV SfM in open environments
and mountain vegetation. The exception is prairie shrub vegetation where the
UAV-lidar RMSE is slightly larger than the UAV-SfM RMSE. The significance of the
different relative RMSE values for prairie shrub vegetation is negligible
relative to the much larger differences noted in the other domains.
UAV-lidar bias is consistently negative (
Comparison of snow depth observations from snow probes and snow depth estimates from UAV techniques. Plots are segmented within each vegetation class (rows), site (columns), and observation method (colours).
The influence of vegetation on estimating snow depth from UAVs can be directly assessed by considering the errors associated with different vegetation classes (Fig. 5). When considering UAV lidar, the errors are worse in the presence of vegetation. Open prairie and open Fortress RMSE values are similar (0.09 and 0.1 m RMSE, respectively), while vegetated sites have a larger error (0.13 to 0.17 m RMSE) with no observed dependency upon vegetation class or type. The sample size of snow depth probe observations is smaller for vegetation sites than open sites, which has implications for error metrics – outliers will have greater weight. The UAV lidar is equally successful at penetrating the open, leaf-off deciduous tree canopy at the prairie sites as the closed, needleleaf canopy at the Fortress site, based on the similar RMSE values within each site's tree vegetation class. The UAV-lidar RMSEs for the shrub and tree vegetation classes at the Fortress and prairie sites are within 0.04 m of each other. For UAV SfM, the errors differ widely among vegetation covers. The open class has a large RMSE range between sites (0.1 m in prairie and 0.3 m at Fortress), while the vegetated class RMSEs range from 0.13 to 0.33 m.
UAV SfM reports slightly better metrics than UAV lidar in the prairie
shrub case: the difference between these techniques is only 0.04 m, which is
within the
Snow depth is estimated from differencing the snow and ground DEMs. Therefore, the uncertainty of the snow depth is a propagation of the error of both the snow and ground DEMs. To distinguish which DEM may contribute more to the snow depth error, the remotely sensed surface elevations were compared to the surface elevations from manual GNSS surveys using boxplots (Fig. 6), which illustrate that the UAV-SfM snow-surface elevations have errors consistently greater than the corresponding UAV-lidar surfaces at Fortress. In the prairie snow-surface case, the median RMSE is consistently lower for UAV SfM than UAV lidar, but UAV SfM does have more variability in its errors. The ground surface was only available from UAV lidar for this study, so no corresponding UAV-SfM ground surface analysis is available. The snow-free UAV-lidar survey has a consistently higher or more variable RMSE than the snow surfaces (with the exception of the open prairie and open and tree Fortress UAV SfM).
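For reference, the propagation of the two DEM errors into the depth estimate can be written out explicitly; independence of the snow- and ground-surface errors is an assumption here, not something the paper establishes.

```python
# Sketch: if snow-surface and ground-surface DEM errors are independent,
# the snow depth error combines in quadrature:
#   sigma_depth = sqrt(sigma_snow**2 + sigma_ground**2)
import math

def depth_sigma(sigma_snow_m: float, sigma_ground_m: float) -> float:
    """Propagated snow depth uncertainty (m) for independent DEM errors."""
    return math.hypot(sigma_snow_m, sigma_ground_m)

# Two 0.10 m surfaces combine to roughly 0.14 m of depth uncertainty
print(depth_sigma(0.10, 0.10))
```

This is why a noisy ground DEM degrades every snow depth map differenced against it, even when the snow-surface scans themselves are accurate.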
Boxplots of RMSEs of UAV-estimated and RTK-surveyed surface
elevations segmented by surface condition, technique, site, and vegetation
classification. The error metrics approach the
The quality of a remotely sensed snow depth estimate is directly tied to how
much interpolation is required to fill gaps in a point cloud. The point
clouds were classified into areas where
Fortress Ridge (14 February 2019) study site with an example
Owing to the predominantly open nature of the prairie sites, there is minimal
difference in point coverage between the UAV-lidar and UAV-SfM techniques.
The average extent of the study domain covered with a point density of
Rosthern (18 March 2019) study site with an example
Clavet (20 March 2019) study site with an example
Snow depth mapping with UAVs has had widespread application in recent years
(Bühler et al., 2016; Harder et al., 2016; Vander Jagt et al., 2015; De
Michele et al., 2016). The emphasis has been on using SfM techniques to
difference DEMs. One of the objectives of this work was to consider the snow
depth accuracies possible with the current state of the art of UAV-SfM
versus UAV-lidar platforms. What has been demonstrated here is that while
there are still errors in UAV lidar (as with any measurement), they are
smaller and more consistent relative to UAV SfM. An unavoidable problem for
all SfM implementations, which is reflected in this work, is that SfM can
only sense the surface – whether that is the ground/snow surface or the
top of a vegetation canopy (Westoby et al., 2012). This makes it
fundamentally inappropriate for sub-canopy mapping of snow. Sub-canopy snow
depth mapping with UAV SfM therefore becomes an exercise in interpolating
snow depth values observed in open areas without vegetation to areas with
dense vegetation, rather than sensing the actual snow depth under the
canopy. Open areas will have greater snow depths than forest areas (Troendle,
1983; Swanson et al., 1986; Pomeroy et al., 2001; Mazzotti et al., 2019),
meaning UAV-SfM solutions, or any approach which requires interpolation of
point cloud gaps beneath trees, will overestimate snow (Zheng et al., 2016).
The ability of UAV lidar to map snow depths with and without canopy cover
and capture tree wells with an RMSE
The increased continuous point coverage of UAV lidar is the main advantage over UAV SfM when trying to map sub-canopy snow depth. While snow depth accuracy at times can be similar between techniques, the ability of UAV lidar to sense a surface below vegetation is critical to develop a coherent snow-surface DEM. The point cloud cross section illustrated in Fig. 7 emphasises these findings, highlighting the wider gaps in the UAV-SfM point cloud beneath individual trees that require interpolation over longer distances, which results in a greater potential for error. Features such as tree wells, where the snow depth decreases with proximity to a tree due to interception/sublimation losses and radiative melting (Pomeroy and Gray, 1995; Musselman and Pomeroy, 2017), will be missed. An interesting dynamic of the RMSEs is that while lidar is comparable across all the sites and vegetation categories, the UAV-SfM RMSE values are much greater in the mountain domain. This is attributed to interpolation artefacts. In prairies where topography is fairly flat, the interpolation of the few gaps can give a reasonable approximation of the actual surfaces. In contrast, mountainous regions have a much more complex topography, and the interpolation of large gaps misses much of the small-scale topography and snow–vegetation interaction features. Interpolation works better between two points that are on the same plane (prairies) rather than on a complex non-linear slope (mountains), and where gaps in the point cloud are smaller.
The ability of UAV lidar to map sub-canopy snow depth is established by the consistent error metrics reported, as well as the continuous bare-surface point cloud coverage. The dynamics of snow depth at snow and vegetation process-resolving scales can therefore be examined. Two examples are presented here to exemplify analyses that are possible with UAV lidar.
The differences between open and forest snow cover processes can be explored by examining the difference in snow depth between the UAV-lidar scans that took place on 13 February and 25 April 2019 at Fortress. Over this interval, there was intermittent precipitation totalling approximately 100 mm, measured at storage gauges within the study area. The UAV-lidar-measured change in snow depth visualises how snow–vegetation interactions translated this snowfall into a snow depth distribution change over a 2-month interval (Fig. 10). In the Fig. 10c cross section, there was accumulation of up to 2 m over the September–April time period on lee slopes, while the upper windswept portions of the ridge demonstrate snow erosion between February and April. The dynamics and extents of blowing snow sources (grey/red) and sinks (blue) are clearly visualised in Fig. 10a, which closely match the findings of Schirmer and Pomeroy (2020), who used SfM for the same study region. In the forest, the UAV lidar observed the increasing snow drifts on the tree line (the krummholz and tree islands – blue areas at the top of the facing slope in Fig. 10a). Within the forested (Fig. 10b) transect, there is a general decline in snow depth from February to April due to melt on a south-facing slope (on the left of the figure) and the development of tree wells in the middle of the transect (orange polygons). The Fig. 10b transect demonstrates the lack of wind redistribution in the forest; snow accumulation is consistently observed to be less than precipitation over the transect due to interception losses, while the Fig. 10c transect on the ridgeline demonstrates significant wind redistribution, and snow accumulation on the lee slope greatly exceeds the observed precipitation.
In the prairies, wind redistribution is the main driver of snow depth
spatial variability. Areas of tall vegetation accumulate wind-blown snow
from open upwind sources and are typically associated with the deepest
snowpacks. In the winter of 2019, the chronology of snow, temperature, and
wind events defined the final snow depth distribution (Fig. 11a). The
UAV lidar flown on 13 March captures all of these interactions. Deep snow
drifts are found in the roadside ditches (linear features of 1.5 m snow depth
on the north and northwest corners; Fig. 11a), on the edges of wetland
vegetation (
Peak snow depth at the Rosthern site from the UAV-lidar scan on 13 March
2019
Prairie snowpacks are shallow, leading Harder et al. (2016) to conclude that UAV SfM was unable to capture snow ablation patterns as the signal-to-noise ratio in the open domain was too low, and vegetated area errors were not considered. With the demonstrated ability of UAV lidar to consistently map shallow snow in open areas and deep snow in the vegetated areas, this can be reattempted. Consider the difference in snow depth between 18 and 23 March (Fig. 11b), which represents the earliest part of the active melt period in this particular snowmelt season. Two examples of the spatial variability of process interactions can now be visualised at the appropriate resolutions. First, the spatial variability of albedo is a major driver of snowmelt. The greatest melt occurs alongside the gravel-covered “grid” roads in the ditches where road dust significantly lowers the albedo, thereby accelerating the melt of the deep snowpacks. Moving eastward from the road ditches into the open fields, there is a general decrease in snowmelt depth across the scene, visualised in the Fig. 11c transect. This pattern is likely due to the redistribution of dust from the grid roads to the open-field snow surface by the prevailing westerly winds. A snow-surface dust concentration gradient develops over the winter with higher concentrations of dust, and therefore lower albedo (Woo and Dubreuil, 1985), in the west than the east. This increase in albedo, and therefore decrease in solar radiation available to melt snow, corresponds to a decrease in the snowmelt rate (Fig. 11c), moving easterly away from the grid road. Second, the spatial variability of snowpack cold content influences melt rates in the early part of the melt season. Within the agricultural field, the sastrugi drifts are not melting due to the larger cold content of the deep snowdrifts relative to that of the shallower surrounding snowpacks.
This is also prevalent in the non-melting deep snowdrifts at the vegetated wetland edges. With UAV lidar, a complete picture of the early and asynchronous snowmelt processes is possible. If reliant on UAV SfM, the interpolation needed to fill point cloud gaps near vegetation and at the tops of the sastrugi will obscure the full spatial pattern of snow depth change that conveys the heterogeneity of ablation processes. The high spatial resolution and vertical accuracy of UAV lidar are required to capture these spatial patterns as the length scales of the snow-surface features of interest are small; i.e. sastrugi drifts are on the metre scale, and their changes at daily timesteps are on the centimetre scale.
The processes visualised in the Fortress and Rosthern examples are not new, but the value of UAV lidar is that spatial patterns and changes can be observed across complex landscapes and vegetation gradients with a consistent resolution and accuracy. UAV lidar will therefore be a powerful tool to understand landscape-scale snow–vegetation interactions, as well as to make a core contribution to the validation and improvement of distributed snow process modelling.
UAV lidar, relative to UAV SfM, provides the ability to measure snow depth
below vegetation canopies, but it does come at a higher cost and logistical
complexity. There are many similarities between the approaches, and one
commonality is that both UAV lidar and UAV SfM require access to a GNSS
solution to geolocate point clouds in absolute space. The Leica GS16 package
used here is on the expensive side of the spectrum (CAD 70 000), and
cheaper equipment, subscription to virtual reference station networks if
available in the study area (requires only a rover and not a base station),
and equipment rentals are all viable alternatives to lower costs. The main
cost difference between UAV-lidar and UAV-SfM platforms is therefore in
terms of the UAV sensor payload. A plethora of UAV-SfM options with and
without RTK or PPK photo geotagging are available and can range from small
inexpensive systems like consumer-grade UAVs (DJI Phantom 3, < CAD 2000) to more expensive options like the senseFly eBee X PPK system
(CAD 30 000) used here. Current integrated lidar systems suited to UAV
snow mapping (laser wavelengths
The ability of UAV lidar to resolve sub-canopy snow depths is not without challenges. Precise classification of surface points from snow and ground scans is needed to resolve snow depth at resolutions that confidently capture snow–vegetation interactions. Where there are dense shrubs, the last returns will not necessarily be the snow or ground surface, and therefore last-return methods common to airborne applications will not be appropriate. Sub-canopy snow depth mapping requires careful selection of the appropriate point cloud classification and filtering tools and associated parameters to be able to reliably detect the sub-canopy bare surface and achieve the desired quality and precision in a final point cloud. To preserve the small-scale surface variability, point cloud processing will be less efficient as all points need consideration, and the focus on small-scale features will at times lead to erroneous inclusion of points representing large-scale non-surface objects. The algorithm and parameter decisions also have to be adjusted for each flight and site/environment for UAV SfM due to the variable quality and noise of the generated point cloud.
An especially challenging feature in resolving a ground surface is the presence of low and dense vegetation such as shrubs and wetland reeds. This is evident in looking in the centre of the wetland zones (red polygons) of Fig. 11a where there are negative snow depths calculated. In this case, the lidar pulses cannot penetrate the dense vegetation to the underlying ground surface, and the classified bare-ground points have a positive bias. As snow accumulates, the reeds compress and shrubs bend over to the extent that the corresponding snow surface is below the biased bare-ground surface. In the examples presented above, the areas of negative snow are limited to areas where snow depth is relatively shallow in comparison to the deep snow on the wetland edges. This challenge might also be apparent in other regions such as the Arctic tundra, where shrub bending and burial by snow have been extensively documented (Pomeroy et al., 2006; Sturm et al., 2005). While shrubs are much sparser than wetland reeds, their dynamic change in height and potential to positively bias the ground surface extraction will increase uncertainty of snow depth estimation in these hydrologically significant snow accumulation areas. More powerful lasers and higher scan rates may be able to increase point cloud density and penetration to the ground surface, but current sensors with these characteristics may exceed the payload capacities of most UAV platforms. Advances in bare-surface classification/filtering software tools to address the large noise associated with low and dense vegetation are an obvious avenue of improvement. This avenue is inherently limited, as even a perfect bare-surface extraction algorithm will not identify points at the ground surface if pulses cannot penetrate dense vegetation to the ground surface. The time of year chosen for the ground surface scan, ideally right after snowmelt when vegetation is at its lowest and not growing yet, may minimise errors. 
Unfortunately, this may not be feasible if the critical wetland areas are inundated as is often the case in the Canadian Prairies in spring.
Mapping sub-canopy snow depth is important, but the ultimate variable of interest is snow water equivalent (SWE). The challenge is that, at snow–vegetation interaction scales, there may be significant variability in snowpack densification because it is driven by different processes across a landscape (Faria et al., 2000). Densification from wind packing prevails in open areas, whereas metamorphic densification driven by temperature gradients dominates in sheltered sub-canopy areas (López-Moreno et al., 2013). Current methods of modelling or measuring snow density are not without problems at these small scales. Modelling snow density imposes conceptual understandings of these processes (Raleigh and Small, 2017; Wetlaufer et al., 2016), which may be inappropriate for the small-scale features that need to be represented; for example, such models may miss the mechanical densification caused by snow clumps unloading or dripping from the canopy. Observational approaches are also a challenge, as typical in situ measurements are destructive, limited in extent, and often too sparse to develop robust depth–density relationships at both the small local and large landscape scales needed (Kinar and Pomeroy, 2015a; Pomeroy and Gray, 1995). Opportunities may exist to pair UAV lidar with other UAV-borne sensors, such as passive gamma ray or snow acoustics (Kinar and Pomeroy, 2015b), to non-destructively develop high spatial and temporal resolution estimates of snow density and, ultimately, the water equivalent.
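The depth-to-SWE conversion itself is straightforward once a density estimate is available; the difficulty discussed above lies entirely in obtaining that density at the right scale. A minimal sketch of the conversion, with illustrative (assumed, not measured) densities for a wind-packed open site and a sheltered sub-canopy site:

```python
def swe_mm(depth_m, density_kg_m3, rho_water=1000.0):
    """SWE (mm) = depth (m) x (snow density / water density) x 1000."""
    return depth_m * density_kg_m3 * (1000.0 / rho_water)

# Same 0.5 m depth, two assumed densities: the open, wind-packed snowpack
# holds substantially more water than the low-density sub-canopy snowpack.
print(swe_mm(0.5, 350.0))  # 175.0 mm (open, wind-packed)
print(swe_mm(0.5, 200.0))  # 100.0 mm (sheltered sub-canopy)
```

The factor-of-two spread in SWE at identical depth is why spatially distributed density information is needed alongside lidar-derived depths.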
Remote sensing techniques to determine snow–vegetation interactions have
consistently been challenged by the presence of vegetation. This work
directly considers emerging UAV-lidar and UAV-SfM techniques to address this
gap in observational capacity. Based upon extensive data collection at a
variety of sites and snow conditions with varying snow–vegetation processes,
the ability of UAV lidar to measure sub-canopy snow depth is demonstrated.
UAV lidar provides snow depth estimates with RMSEs
The data underlying this analysis and its documentation are available at
PH designed the field campaigns, performed the data collection, and completed/managed the data processing and analysis. PH, JP, and WH prepared and edited the paper.
The authors declare that they have no conflict of interest.
We gratefully acknowledge the field and data processing assistance from Dong Zhao, Alistair Wallace, Greg Galloway, Robin Heavens, Lindsey Langs, Cob Staines, Andre Bertoncini, and Bosse Sottmann. The support of Fortress Mountain Ski Resort, the Natural Sciences and Engineering Research Council of Canada, the Canada Research Chairs Program, Canada First Research Excellence Fund, and Western Economic Diversification Canada, a department of the Government of Canada, made this study possible.
This research has been supported by the Natural Sciences and Engineering Research Council of Canada (Discovery Grants Program – Snow Hydrology), the Canada Research Chairs Program (Canada Research Chair in Water Resources and Climate Change grant), the Canada First Research Excellence Fund (Global Water Futures grant), and the Western Economic Diversification Canada (Smart Water Systems Laboratory grant).
This paper was edited by Chris Derksen and reviewed by two anonymous referees.