Brief Communication: Initializing RAMMS with High Resolution LiDAR Data for Avalanche Simulations
Abstract. The Rapid Mass Movements Simulator (RAMMS) is an avalanche dynamics software tool for research and forecasting. Since the model's conception, the sensitivity of simulation results to model inputs has been well documented. Here, we introduce a new method for initializing RAMMS with high resolution LiDAR data that can be easily operationalized for avalanche forecasting. As a demonstration, hypothetical avalanche simulations were performed while incrementally incorporating semi-automated, LiDAR-derived values for snow depth, interface topography, and vegetative cover from field-collected LiDAR data. Results show considerable variation in the calculated runout extent, flow volume, pressure, and velocity of the simulated avalanches when these LiDAR-derived values are incorporated.
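As background for the discussion that follows, below is a minimal, hedged sketch of how a spatially variable snow depth raster might be derived from co-registered snow-on and snow-off LiDAR surfaces and used to estimate a release volume. The array sizes, cell size, and start-zone mask are illustrative assumptions, not values from the manuscript.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, co-registered 1 m gridded surfaces (illustrative values only).
ground_dem = rng.uniform(2500.0, 2600.0, size=(200, 200))           # snow-off (bare ground) surface, m
snow_surface = ground_dem + rng.uniform(0.5, 2.5, size=(200, 200))  # snow-on surface, m

cell_size = 1.0  # assumed grid spacing, m

# Snow depth is the per-cell difference between the snow-on and snow-off surfaces.
snow_depth = np.clip(snow_surface - ground_dem, 0.0, None)

# Release volume over a hypothetical start-zone mask, using the variable depth field.
start_zone = np.zeros_like(snow_depth, dtype=bool)
start_zone[50:120, 60:140] = True
release_volume = snow_depth[start_zone].sum() * cell_size**2  # m^3

print(f"mean depth in start zone: {snow_depth[start_zone].mean():.2f} m")
print(f"release volume: {release_volume:.0f} m^3")
```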
Status: closed
RC1: 'Comment on tc-2020-368', Anonymous Referee #1, 22 Jan 2021
This paper discusses the influence of topography inputs on avalanche dynamics simulations performed with the model RAMMS and the possibility of initializing RAMMS (or avalanche dynamics models in general) with LiDAR data.
In particular, the authors highlight the possibility of using LiDAR scans to provide useful data for determining the sliding surface of an avalanche. This might be the ground surface (DEM) or different sliding surfaces within the snow cover when LiDAR scanning is performed periodically during the winter.
Before going into more detailed comments, I have two major concerns with this paper.
The first concern relates to the impression, from what the authors write, that we know the sliding surface of an avalanche; this is surely not true. The sliding surface changes continuously during avalanche motion. Models simplify reality and are able to reproduce avalanches, including all their uncertainties, through some model parameters. Well-calibrated models (such as RAMMS, used in this paper) can reproduce extreme events quite well. However, as soon as we go into the details of the physics of a snow avalanche, more processes have to be considered, and RAMMS Operational no longer performs well in simulating them.
More physically based models should be used; the SLF itself is developing RAMMS-Extended, which includes more physical processes, such as erosion, which is of fundamental importance, also with regard to the scope of your study. To obtain the sliding surface of an avalanche, the process of snow erosion should be considered.
Therefore, I think this work is a good exercise in showing how the results of RAMMS depend on the topography inputs (still with some weaknesses; see the next point), but it cannot be recommended for avalanche forecasting and mitigation efforts, as the authors propose.
The second concern (though less important than the first) relates to the fact that the extent of the LiDAR data does not cover the full extent of the simulated avalanches. The authors therefore needed to force the model parameters, as they write at lines 93-96. Even when using a more physically based model, I think it is necessary to capture the whole extent of the avalanche basin in order to perform a reasonable sensitivity analysis with respect to the topography inputs.
Citation: https://doi.org/10.5194/tc-2020-368-RC1
AC1: 'Reply on RC1', James Dillon, 19 Feb 2021
The authors would like to sincerely thank Anonymous Referee #1 for their thoughtful comments and careful review of our manuscript.
Regarding the first point of concern, we agree that processes such as snow erosion and entrainment can significantly alter the sliding surface topography within the track of the avalanche, as has been shown with in situ and remote measurements from previous avalanche dynamics studies (Sovilla and Bartelt, 2002; Sovilla et al., 2006; and others). However, evaluating how well RAMMS-Operational or -Extended handles the mechanisms of snow entrainment is not within the scope of our study. Rather, our focus is on demonstrating how the incorporation of high spatial and temporal resolution model inputs can better represent the initial conditions of the sliding surface, as well as variable snow depth and vegetation distribution. The snow surface in the track and runout zone (the initial condition of the sliding surface at the time of avalanche onset) is often quite different topographically from the ground surface beneath, due to variable snow accumulation and redistribution. We note that a better representation of those initial conditions is useful and a relevant consideration given RAMMS' sensitivity to such inputs. This is in line with previous work (Bühler et al., 2011), in which higher spatial resolution DEMs were recommended because coarser resolution DEMs failed to represent complex topography in release zones and avalanche tracks. That said, because this article is only a brief presentation of a novel initialization methodology and a subsequent sensitivity analysis, you are correct that we are not able to conclude at this time that initializing RAMMS with high resolution LiDAR data significantly improves its results in an operational setting. To study this, several back-calculations of well-documented avalanches would be required. We will make this clear in our revised manuscript.
With regard to the second point of concern, our inability to observe the full runout extent of the avalanche, we agree that this is an unfortunate byproduct of our study site and scanning location. For future work, we suggest selecting sites and scanning locations with an observable runout area for the relevant avalanche sizes. However, because friction coefficient values were increased equally and consistently across all of our simulations, we maintain that any difference in runout extent between simulations is due to the outlined variability of model inputs and/or the influence of those inputs on the spatial distribution of friction coefficient values in our study area. Thus, we found this to be a reasonable, although perhaps not ideal, solution for incorporating runout extent into our sensitivity analysis.
Citation: https://doi.org/10.5194/tc-2020-368-AC1
RC2: 'Comment on tc-2020-368', Anonymous Referee #2, 27 Jan 2021
The contribution "Brief Communication: Initializing RAMMS with High Resolution LiDAR Data for Avalanche Simulations" by James Dillon and Kevin Hammonds tackles a common problem: how input data influence modeling results in an operational avalanche simulation setting. The authors use a case study supported by LiDAR data obtained at Yellowstone Club Ski Resort (YC), near Big Sky, MT, USA, to demonstrate the variability of simulation results with respect to changes in the input data for four scenarios (ground DEM; snow-covered DEM; snow-covered DEM with variable release depth; snow-covered DEM with variable release depth and LiDAR-supported vegetation delineation) and two test cases (~size D2 avalanches). The results show that certain simulation outcomes (e.g. flow depth and velocity) change significantly with respect to the different input scenarios; however, the discussion of the source of these variations remains only partially addressed.
Generally, the paper is well written; some technical terms, descriptions of methods (e.g. the definition of runout, the choice of parameters), as well as the results and discussion, require refinement.
All in all, the study is interesting in that it highlights the sensitivity of the simulation results and shows that careful treatment of a simulation tool is required to distinguish different types of uncertainty sources, which is a valuable message for model users.
Short / technical comments:
- p1 l21: “common DEM” – what does common refer to?
- p1 l25: does the “snow-snow” interface automatically correspond to the sliding layer – what about different avalanche types?
- p2 l30: I am not sure you can refer to “more realistic” in that sense – it could be “more accurate” if a case study with documented observations were available.
- p2 l38: what kind of “error”?
- p3 l70: I very much appreciated the idea of data availability but was unable to open the corresponding link: “https://doi.org/10.5061/dryad.z8w9ghx9z”
- p3 Fig 1: could you indicate where the U and L test cases are located?
- p4 l 82 & 97: could you specify what “traditional” means in this context and which “automatically generated values” you refer to? Please comment on how vegetation is treated in terms of friction coefficients (scenario 4)?
- p4 l 82-95: The scenarios need to be more clearly defined. To me it remains unclear what impact the delineation of vegetation has on the simulation input (different friction parameters in these areas? Which ones?), see also major comments below.
- p5 l 106: How do you define and measure runout (please check more recent literature on model evaluation)? Which simulation results do you use? What is your reference and how is it measured?
- p6 l119: Is this due to the redistribution of the starting mass or due to changes in topography (or changes in friction coefficients (which ones?))? What are the differences between your test cases?
- pt Fig 3: What does “final” deposition height refer to – the flow depth at the last (which?) time step – to my knowledge the underlying flow model does not consider deposition?
- p7 l133: I think that it is only possible to show that a change in the input data implies a “dramatic change” in the simulation results. A dramatic “improvement” would only be possible if you compared to observation data.
Major comments:
- The fact that the outline of the LiDAR DEM is insufficient to cover the whole avalanche path requires an ad hoc assumption, increasing the Coulomb friction values by an arbitrary value of 0.4 to achieve “visible” runouts. This makes it impossible to properly interpret the simulation results with “automatic parameters” and very difficult to judge the “runout” results for the increased friction values (e.g. because the avalanche may not even reach flatter terrain, where “runout” differences may be far larger).
- I am not sure the authors succeed in identifying the source of the result variability (please see also recent publications on Bayesian methods for parameter/uncertainty handling in avalanche simulations for more information), for two major reasons:
1) The relation of changes in topography (or vegetation delineation) and friction coefficients with respect to result variability: using “automated” friction coefficients implies a friction-coefficient dependence on the topography. Therefore, not the topography directly (the sliding surface from LiDAR) but the related friction coefficients may govern the result changes. These friction coefficients are optimized values, usually determined with respect to a specific spatial resolution, and largely depend on the topography (usually something like 5 m “winterly smoothed” terrain). The “automated” coefficients change not only with different topography (due to changes in curvature, slope, etc.), but already with changing spatial resolution, since, for example, the magnitude of curvature is largely related to the spatial resolution. A similar question arises for the delineation of the vegetation: what actually changes in the delineated areas (friction coefficients?), to what degree does it change, and how do the manual and LiDAR delineations compare in terms of, e.g., total area?
2) Numerical rather than flow-model-based constraints: changes observed between scenarios 1-2 may be related to the change in the sliding surface, to the associated (automated) friction-coefficient changes (see comment above), or even to numerical reasons such as the utilized stopping criterion. For example, Figure 3 refers to the final deposition height (or rather the spatial extent of the flow height in the last time step?; see comment above). Usually this would refer to the time step at which the stopping criterion is met (p. 4, l. 93: 5% or 10% momentum threshold, respectively) and could therefore be different for each scenario/simulation (the differences in max. velocity and flow depth indicate that the momentum threshold, and therefore the final time step, may also be completely different for each scenario/simulation?). Thus the observed result changes would rather be a “numerical artifact” than an influence of the changes in release mass/topography/vegetation (or the respective friction parameters). I therefore suspect that you are looking at a “numerical” variation rather than an “input data” or “flow model/input parameter” based one.
Relevant Literature
Heredia, M. B., Eckert, N., Prieur, C., and Thibert, E.: Bayesian calibration of an avalanche model from autocorrelated measurements along the flow: application to velocities extracted from photogrammetric images, Journal of Glaciology, 66(257), 373–385, 2020.
Fischer, J.-T., Kofler, A., Huber, A., Fellin, W., Mergili, M., and Oberguggenberger, M.: Bayesian Inference in Snow Avalanche Simulation with r.avaflow, Geosciences, 10(5), 191, 2020.
Vera Valero, C., Wever, N., Bühler, Y., Stoffel, L., Margreth, S., and Bartelt, P.: Modelling wet snow avalanche runout to assess road safety at a high-altitude mine in the central Andes, Natural Hazards and Earth System Sciences, 16(11), 2303, 2016.
Citation: https://doi.org/10.5194/tc-2020-368-RC2
AC2: 'Reply on RC2', James Dillon, 19 Feb 2021
The authors would like to sincerely thank Anonymous Referee #2 for their thoughtful comments and careful review of our manuscript.
Per your first major comment, we agree that the LiDAR field-of-view being unable to capture the entirety of the runout zone is an unfortunate byproduct of our study site and scanning location (also see our reply to Referee #1, paragraph 2). Though this limitation required an ad hoc adjustment of friction coefficient values to limit runout extent, because it was applied consistently throughout all simulations, we maintain that it supports the central point of the communication: that runout distance and area, as well as all other modeled outputs (max velocity, flow height, etc.), can vary considerably when LiDAR data are incrementally incorporated for initialization. We also agree that the extent would likely have varied even more between simulations had the slides been allowed to reach flatter terrain, which we believe further highlights the need for additional investigations into the merit of using LiDAR data in avalanche dynamics modeling. While the magnitude and nature of this variability may differ between avalanche locations and scenarios, we found the sensitivity observed in our study notable and worth sharing with the broader avalanche science and remote sensing community as a Brief Communication.
With regards to your second major comment, related to sources of variability, these are both insightful critiques, which we address below:
1) You are correct that in RAMMS, friction coefficients vary spatially with topography, and thus altering the sliding layer changes not only the topographic input but also the dependent friction coefficient distribution, both of which impact simulation results. We believe this to be a relevant consideration, but beyond the scope of our study. In the work presented, we are only focused on investigating the sensitivity of an avalanche dynamics model (RAMMS) to LiDAR-derived inputs for initialization, as opposed to using generic inputs and assumptions. Therefore, we do not seek to quantify, investigate, or pontificate on the strengths and weaknesses of the model itself. For example, when using RAMMS, a different DEM might sometimes have a greater effect on results via the associated friction coefficient distributions (as you note), while at other times the varied topography itself may play a larger role. It is not our intent to distinguish between these two, but rather to show that, holding all other variables equal, the incorporation of LiDAR-derived inputs can produce drastically different simulation results. The same can be said of vegetation delineation and its influence on simulation results. In RAMMS, areas designated as vegetated have a scalar value added to what the unvegetated friction coefficient would have been, based on the local topography. Therefore, in the case of varied vegetation delineation (manually identified, LiDAR-derived, etc.), the mechanism leading to differing simulation outputs is entirely related to friction coefficients altered by RAMMS, but this is an inherent property of the model and therefore also not within the scope of our study. Based on our results, we suggest that regardless of how a model handles vegetated areas or topographic inputs for flow dynamics simulations, the sensitivity of the model to variations in its initialization must always be a primary consideration, particularly if the model is ever to be used as an operational tool.
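To make the coupling described above easier to picture, here is a generic, hedged sketch of a topography-dependent Coulomb friction field with a scalar increment in vegetated cells. The slope dependence, base value, and increment are invented for illustration and are not the actual RAMMS parameterization.

```python
import numpy as np

def illustrative_friction_field(dem, vegetation_mask, cell_size=5.0,
                                mu_base=0.20, mu_slope_gain=0.05,
                                mu_veg_increment=0.02):
    """Toy, topography-dependent Coulomb friction field (NOT the RAMMS procedure).

    Tying friction to local slope means that changing the sliding-surface DEM
    also changes the friction distribution; vegetated cells receive a scalar
    increment on top of the terrain-derived value.
    """
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

    # Invented dependence: friction drifts upward with slope angle (illustration only).
    mu = mu_base + mu_slope_gain * (slope_deg / 45.0)

    # Vegetation handled as a simple scalar addition to the friction coefficient.
    return np.where(vegetation_mask, mu + mu_veg_increment, mu)

# Toy usage on a synthetic 5 m grid with a planar slope and a vegetated band.
dem = np.add.outer(np.linspace(2600.0, 2400.0, 40), np.zeros(40))  # elevation, m
veg = np.zeros_like(dem, dtype=bool)
veg[25:, :] = True  # hypothetical vegetated band low on the slope
mu = illustrative_friction_field(dem, veg)
print(mu.min(), mu.max())
```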
2) With regards to numerical artifacts, though an interesting and important consideration, we contend these were not a major factor in the cases presented here. First, at no point were we comparing between simulation sets with automated vs. increased friction values, nor across site locations; we were only comparing simulation cases to themselves as LiDAR data were incrementally incorporated (see Table 1). Second, we have no control over the time domain of RAMMS. Intuitively, RAMMS will produce larger and faster avalanches with a larger mass and steeper slopes. This was especially the case at the 'upper' ridge site location, where max pressure, velocity, and height were all reached at the pinch points of the paired hourglass features at mid-slope. In this case, the time step at which these maxima were achieved is not a relevant consideration, and we therefore maintain our position that differences in these maximum values are solely due to differences in the simulation inputs. Furthermore, in simulations with automated friction coefficients, the stopping criterion (5% momentum threshold) was met when enough mass had exited the field-of-view to drop the simulated momentum below the threshold, so any variation in the time step of that occurrence (which was minimal) would not impact maxima recorded mid-simulation. Even if the entire field-of-view had been observable and the stopping criterion had been lowered to a 0% momentum threshold, the debris at this point was spreading and decelerating, and we find it very unlikely that new maxima in any output variable would have been recorded. Similarly, in simulations where friction coefficients were manually increased, the stopping criterion was met long after movement of the debris flow front had largely plateaued (see simulation .gifs), and thus differing final time steps would not have played a significant role in the final debris extent at simulation completion. For these reasons, we respectfully disagree with the suggestion that numerical artifacts played a significant role in our results and analysis.
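For readers following the stopping-criterion discussion, below is a minimal, hedged sketch of how a percent-of-peak-momentum stopping rule can be evaluated against a generic flow model. The step function, state, and decay behaviour are toy assumptions, not RAMMS internals.

```python
import numpy as np

def run_until_momentum_threshold(step_fn, state, threshold_fraction=0.05,
                                 max_steps=10_000):
    """Advance a generic flow model until its total momentum falls below a
    fraction of the peak momentum reached so far (illustrative only; this is
    not the RAMMS implementation).

    step_fn(state) must return (new_state, total_momentum).
    """
    peak_momentum = 0.0
    momentum_history = []
    for _ in range(max_steps):
        state, momentum = step_fn(state)
        peak_momentum = max(peak_momentum, momentum)
        momentum_history.append(momentum)
        # Stop once momentum has decayed below the threshold fraction of its peak.
        if peak_momentum > 0.0 and momentum < threshold_fraction * peak_momentum:
            break
    return state, momentum_history

# Toy demonstration: momentum rises, peaks, then decays exponentially.
def toy_step(t):
    return t + 1, (t + 1) * np.exp(-0.05 * t)

_, history = run_until_momentum_threshold(toy_step, 0)
print(f"stopped after {len(history)} steps; peak momentum {max(history):.2f}")
```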
Responses to your minor/technical comments:
- p1 l21: “common DEM” – what does common refer to?
In this case we are referring to an off-snow DEM of the ground surface. We will revise this line in the edited manuscript.
- p1 l25: does the “snow-snow” interface automatically correspond to the sliding layer – what about different avalanche types?
The snow-snow interface does intuitively correspond to the sliding layer within a start zone. In the transition and runout zones, the top snow surface represents the initial condition of the sliding layer, though, as we discuss in our responses to Referees #1 and #3, this surface changes during the avalanche event due to erosion and entrainment. The point is that either one is a better representation of the sliding surface initial conditions than a ground DEM. Please see paragraph 1 of our response to Referee #1.
- p2 l30: I am not sure you can refer to “more realistic” in that sense – it could be “more accurate” if a case study with documented observations would be available.
You are correct. We meant to convey that the LiDAR inputs better represent sliding topography, relevant vegetation, and spatially variable snow depth relative to traditional RAMMS initialization. The operational capacity of these inputs and their potential to improve RAMMS simulation results has yet to be verified (see paragraph 2 of our response to Referee #1). We will correct this line in our revised manuscript.
- p2 l38: what kind of “error”
Using a point measurement of snow depth to an interface of concern rather than accounting for spatial variability in snow depth atop the interface will inevitably result in a less precise estimate of release volume/mass. We will incorporate this context into that section.
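As a hedged numerical illustration of this volume/mass error (with invented depths, grid size, and probe value), compare a single point depth applied uniformly with a spatially variable depth field:

```python
import numpy as np

rng = np.random.default_rng(42)
cell_area = 1.0  # m^2, assuming a 1 m grid

# Hypothetical start zone of 5000 cells with a variable, right-skewed slab depth (m).
slab_depth = rng.gamma(shape=4.0, scale=0.25, size=5000)  # mean of ~1.0 m

point_depth = 0.8  # a single (assumed) manual probe measurement, m

uniform_volume = point_depth * slab_depth.size * cell_area
distributed_volume = slab_depth.sum() * cell_area

print(f"uniform-depth release volume:     {uniform_volume:,.0f} m^3")
print(f"distributed-depth release volume: {distributed_volume:,.0f} m^3")
print(f"relative difference: {abs(uniform_volume - distributed_volume) / distributed_volume:.1%}")
```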
- p3 l70: I very much appreciated the idea of data availability but was unable to open the corresponding link: https://doi.org/10.5061/dryad.z8w9ghx9z
The DOI is to be made available upon publication. In the interim, you may access the data assets at this link: https://datadryad.org/stash/share/W4ZoRKMTNE6QIXBJdDFdB_NQZpyxm230EbxR2fycNOU.
- p3 Fig 1: could you indicate where the U and L test cases are located?
Yes, we will add outlines of the U and L start zone boundaries to Figure 1 in the revised manuscript.
- p4 l 82 & 97: could you specify what “traditional” means in this context and which “automatically generated values” you refer to? Please comment on how vegetation is treated in terms of friction coefficients (scenario 4)?
In line 82 the sentence continues to elaborate on what we refer to as traditional: "...'traditional' inputs were used, assigning a ground DEM as the sliding layer, a uniform snow depth across the entire start zone, and a vegetated area delineated manually from photographs." There is an automated process in RAMMS that computes the spatial distribution of friction coefficients based on topography. In areas delineated as vegetated, a scalar value is added to account for roughness by increasing friction. We will add this description to the revised manuscript.
- p4 l 82-95: The scenarios need to be more clearly defined. To me it remains unclear what impact the delineation of vegetation has on the simulation input (different friction parameters in these areas? Which ones?), see also major comments below.
Altering the sliding surface DEM changes both the sliding topography and the spatial distribution of topography-dependent friction coefficients. Accounting for spatial variability in snow depth atop an interface alters the release volume/mass. In areas delineated as vegetated, a scalar value is added to the friction coefficients, and thus variation in vegetation delineation alters the spatial distribution of friction coefficients. We will better describe how the incorporation of LiDAR data at each step is handled within RAMMS in the revised manuscript.
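To summarize which input changes in each scenario (as described in the manuscript and in the reviewer's summary above), a small illustrative configuration table follows; the field names are ours for clarity, not RAMMS input keywords.

```python
# Illustrative summary of the four initialization scenarios (field names are
# ours, not RAMMS input keywords).
SCENARIOS = {
    1: {"sliding_surface": "ground DEM",
        "release_depth": "uniform (point measurement)",
        "vegetation": "manual delineation from photographs"},
    2: {"sliding_surface": "LiDAR snow surface",
        "release_depth": "uniform (point measurement)",
        "vegetation": "manual delineation from photographs"},
    3: {"sliding_surface": "LiDAR snow surface",
        "release_depth": "LiDAR-derived, spatially variable",
        "vegetation": "manual delineation from photographs"},
    4: {"sliding_surface": "LiDAR snow surface",
        "release_depth": "LiDAR-derived, spatially variable",
        "vegetation": "LiDAR-derived delineation"},
}

for number, inputs in SCENARIOS.items():
    print(number, inputs)
```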
- p5 l 106: How do you define and measure runout (please check more recent literature on model evaluation)? Which simulation results do you use? What is your reference and how is it measured?
Runout was defined as the linear distance (when viewed aerially) from the stauchwall of the release area to the furthest extent of avalanche debris at the final time step in the simulation. We will state this definition in the revised manuscript.
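For concreteness, here is a minimal, hedged sketch of that runout definition: the planar distance from a stauchwall reference point to the furthest cell containing simulated debris at the final time step. The coordinate arrays and the depth cutoff used to define "debris" are assumptions for the example.

```python
import numpy as np

def planar_runout(debris_depth, x_coords, y_coords, stauchwall_xy, depth_cutoff=0.01):
    """Map-view distance (m) from a stauchwall reference point to the furthest
    cell whose simulated debris depth exceeds `depth_cutoff` (illustrative only)."""
    yy, xx = np.meshgrid(y_coords, x_coords, indexing="ij")
    mask = debris_depth > depth_cutoff
    if not mask.any():
        return 0.0
    dx = xx[mask] - stauchwall_xy[0]
    dy = yy[mask] - stauchwall_xy[1]
    return float(np.hypot(dx, dy).max())

# Toy usage with a 3 x 4 grid of final flow depths (m) on a 10 m grid.
depths = np.array([[0.0, 0.0, 0.2, 0.5],
                   [0.0, 0.3, 0.8, 0.0],
                   [0.0, 0.0, 0.0, 0.0]])
print(planar_runout(depths,
                    x_coords=np.array([0.0, 10.0, 20.0, 30.0]),
                    y_coords=np.array([0.0, 10.0, 20.0]),
                    stauchwall_xy=(0.0, 0.0)))
```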
- p6 l119: Is this due to the redistribution of the starting mass or due to changes in topography (or changes in friction coefficients (which ones?))? What are the differences between your test cases?
Presumably a combination of changes in the topography as well as changes in the spatial distribution of friction coefficients dependent on the changed topography. Parsing the individual contributions of these is beyond the scope of our preliminary study. Also, see the answer above regarding lines 82-95.
- pt Fig 3: What does “final” deposition height refer to – the flow depth at the last (which?) time step – to my knowledge the underlying flow model does not consider deposition?
Correct; we were referring to the flow depth at the final time step, when the momentum-threshold stopping criterion was met. We will state this in the revised manuscript.
- p7 l133: I think that it is only possible to show that a change in the input data implies a “dramatic change” in the simulation results. A dramatic “improvement” would only be possible if you compared to observation data.
We agree, and will revise the wording of that sentence in the updated manuscript.
Citation: https://doi.org/10.5194/tc-2020-368-AC2
RC3: 'Comment on tc-2020-368', Anonymous Referee #3, 28 Jan 2021
This contribution tackles a topic that is, in general, very interesting for a broad modelling community within the cryospheric sciences: how does model input data influence model results? This question is worth investigating. The authors are interested in taking this question a step further by suggesting that critical model input, and the analysis of it, has operational value. I think this is also an interesting problem with impacts worth publishing. Unfortunately, there are major shortcomings of a technical nature in this paper that need to be addressed:
- The LiDAR data do not fully cover the entire extent of the simulated avalanches: I think this critically limits your discussion and conclusions, since you have to introduce assumptions about the Coulomb friction values by arbitrarily increasing them to achieve runouts that are visible to you. Any sort of interpretation falls short thereafter and must be seen in the light of this shortcoming. This makes me wonder if you succeed at all in teasing out the differences in your model runs using different snow/ground surfaces as input. Therefore, I am also unsure whether you can address the operational value of your proposed method.
Asking for a major revision of the technical part is likely not possible for you to achieve, since you would have to acquire a new LiDAR dataset. So I am unsure how you could address this issue in a sensible way, and I am interested in your response.
- I am not familiar with how RAMMS handles erosion into the snow while the simulated avalanche is in motion. However, in reality I would expect the sliding surface to change constantly, which is likely not resolved in RAMMS for good reasons; from my practical experience, RAMMS models avalanche runout (at least for large events) quite well. As a result, I am unsure about your discussion of the dynamics of the simulated avalanches and their interaction with terrain and vegetation.
In general, the paper is well written and contains a good overview and introduction to the problem at hand and the field of avalanche modelling. I am not an English native speaker, so I am not commenting on language and writing style.
However, I believe that the presentation of the LiDAR data and the simulation data (e.g. Figs. 1, 2, and 3) could be sharpened up by adding more topographic context, a scale bar, and a larger, more visible and distinguishable color bar (without rainbow colors).
I would also appreciate more details on the LiDAR scans in terms of georeferencing, co-registration, spatial resolution, which LiDAR scanner and frequency were used, software, etc.
Citation: https://doi.org/10.5194/tc-2020-368-RC3
AC3: 'Reply on RC3', James Dillon, 19 Feb 2021
The authors would like to sincerely thank Anonymous Referee #3 for their thoughtful comments and careful review of our manuscript.
Per your first major comment, we agree that the LiDAR field-of-view being unable to capture the entirety of the runout zone is an unfortunate byproduct of our study site and scanning location. However, we contend that the ad hoc friction coefficient adjustment was a suitable solution for the purposes of our study (see our replies to Referees #1 and #2, paragraphs 2 and 1, respectively). Furthermore, regarding the operational value of initialization via LiDAR, we again refer you to our reply to Referee #1, paragraph 1. We maintain our position that RAMMS, or any other avalanche dynamics model, would most likely only benefit from the incorporation of high spatial and temporal resolution model inputs, whether derived from LiDAR or any other high spatial resolution remote sensing system (e.g., SAR or SfM). Although we are unable to show this conclusively at this time with respect to operational improvements, once additional data from other, more well-documented avalanches become available, this can be investigated more thoroughly.
Regarding your second major comment, we again refer you to our response to Referee #1 (paragraph 1), who raised a similar concern about snow erosion. We agree that processes such as snow erosion and entrainment can significantly alter the sliding surface topography within the track of the avalanche. That said, the snow surface in an avalanche track and runout zone (the initial condition of the sliding surface at the time of avalanche onset) is often quite different topographically from the ground surface beneath, due to variable snow accumulation and redistribution. A better representation of those initial conditions is useful and a relevant consideration given RAMMS' sensitivity to such inputs. This is in line with previous work (Bühler et al., 2011), in which higher spatial resolution DEMs were recommended because coarser resolution DEMs failed to represent complex topography in release and transition zones.
Your recommendations on how to improve the figures are appreciated. We will incorporate your input on these figures as well as additional technical details on georegistration and scanning specifications into our revised manuscript, per your comments.
Citation: https://doi.org/10.5194/tc-2020-368-AC3
Viewed
- HTML: 892
- PDF: 573
- XML: 58
- Total: 1,523
- BibTeX: 41
- EndNote: 47