Surface composition of debris-covered glaciers across the Himalaya using linear spectral unmixing of Landsat 8 OLI imagery
Adina E. Racoviteanu
Lindsey Nicholson
Neil F. Glasser
Interactive discussion
Status: closed
- RC1: 'Comment on tc-2020-372', Marin Kneib, 01 Feb 2021
Summary
The manuscript ‘Surface composition of debris-covered glaciers across the Himalaya using spectral unmixing and multi-sensor imagery’ by Racoviteanu et al. presents a linear spectral unmixing approach to classify the surface of debris-covered glaciers into different surface types, including light and dark debris, ponds, vegetation and ice. The spectral endmembers are derived from the Khumbu area and the method is validated for this domain. The method is then applied to the whole Himalaya. The main output is a map of surface characteristics for all debris-covered glaciers across the Himalaya, including vegetation and ponds. The study finally discusses some of the controls on the occurrence of vegetation and ponds.
The focus of this work is interesting and relevant. The pond and vegetation maps in particular give useful information on the state of the debris-covered glaciers (a more advanced state being characterized by larger ponds and extensive vegetation), and they are a first step towards monitoring the variability of ponds at the large scale. This is of interest for understanding the glacier system but also for monitoring GLOF hazard. These features are also known to contribute significantly to the mass balance of debris-covered glaciers.
However, I have a number of major methodological comments regarding the soundness, validation and transferability of the methods, which are still unclear and would need to be properly addressed for the results and conclusions of the paper to be solid and credible. I also think that the analysis of the controls on ponds and vegetation needs to be improved and should better account for past work (especially for supraglacial ponds), in particular with regard to the Himalayan climate gradient and the glacier-scale pond variability; this would greatly improve the value of this manuscript. For these reasons I recommend that this work undergo major revisions. Finally, I have included a number of minor comments intended mostly to improve the general readability of the manuscript.
General comments
Choice of endmembers and validation: my understanding from the manuscript is that the Pléiades (and RapidEye) images were used as a quality control of the surface type of some Landsat pixels. For this to work, you would need to make sure that the Landsat pixels are composed of only one surface type, or at least quantify the different surface types within each pixel. This can be done fairly easily with some manual delineation from the Pléiades images; however, one thing to be careful with here is the co-registration of the Pléiades and Landsat 8 images. The text mentions that the position of Landsat 8 is accurate to within 50 m, which may translate into significant surface-type differences, i.e. the surface characterized using the Pléiades image may not be the same as in the Landsat image if the two are not correctly aligned. Proper alignment of the Pléiades/RapidEye and Landsat images (using cross-correlation techniques, for instance) should be ensured and demonstrated in the paper.
Focus regions: the method is applied to the entirety of the Himalaya, with a focus on three regions meant to represent the climatic variability across the domain. Lahaul-Spiti is very far to the west, while Khumbu and Bhutan are relatively close to each other in the eastern part of the range. To really represent the climatic gradient as described in the manuscript, I strongly recommend adding one or two study regions between the Khumbu and Lahaul-Spiti domains.
Generalization of the method to all of the Himalaya: the method was calibrated and validated for one specific location in the Himalaya and only for one Landsat 8 image. Further checks are needed to demonstrate the transferability of the method to the whole Himalayan range. I recommend that the authors validate the surface composition maps they obtain for at least one (preferably two or more) additional site and Landsat 8 image. Given the free availability of RapidEye imagery under a Planet academic licence, this should be easily achievable.
Controls on supraglacial ponds: this is an interesting point and one of the, if not the, main outcomes of this manuscript. However, the analysis conducted here is very simplistic, especially considering the work from past studies, and further analysis is needed to show the significance of such results. It is difficult to see anything in the related Figure 11. I would suggest conducting a more detailed analysis of the controls, especially by partitioning the glaciers into elevation bands (see the sketch below), since the ponds are already very variable at the glacier scale (this is obvious when comparing the upper and lower sections of Khumbu Glacier, for instance). The ‘slope’ derivation is unclear - does it relate to the longitudinal surface gradient? (Quincey et al., 2007; Miles et al., 2017; King et al., 2020). Consideration of surface depressions/topography (Benn et al., 2017; Miles et al., 2017; King et al., 2020; Salerno et al., 2016) and velocity (Miles et al., 2017) would also be welcome.
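As a minimal illustration of the elevation-band partitioning suggested above (a rough sketch only, with hypothetical variable names, not code from the manuscript), the unmixed pond fraction could be aggregated per 100 m band for each glacier before relating it to potential controls:

```python
# Rough sketch: aggregate unmixed pond fraction per 100 m elevation band for
# one glacier. Inputs are hypothetical 1-D arrays sampled over the
# debris-covered pixels (DEM elevation, pond fraction from the unmixing).
import numpy as np
import pandas as pd

def pond_fraction_by_band(elev_m, pond_frac, band_width=100):
    """Mean pond fraction and pixel count per elevation band."""
    bands = (np.floor(elev_m / band_width) * band_width).astype(int)
    df = pd.DataFrame({"band_m": bands, "pond_frac": pond_frac})
    return (df.groupby("band_m")["pond_frac"]
              .agg(mean_pond_frac="mean", n_pixels="count")
              .reset_index())

# Synthetic example values, for illustration only
elev = np.random.uniform(4900, 5400, size=5000)
frac = np.clip(np.random.normal(0.02, 0.03, size=5000), 0.0, 1.0)
print(pond_fraction_by_band(elev, frac))
```

The same banded aggregates could then be related to longitudinal gradient, surface velocity or surface depressions, rather than to glacier-wide means.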
Use of the SAM method: The use of the SAM in this manuscript raises a few questions:
- Why use it over the whole Landsat image if the focus is on debris-covered glaciers?
- Why use it only for the Khumbu domain if the aim is to provide maps at the scale of the Himalaya?
- The main advantage of this approach stated here is that ‘it is relatively insensitive to albedo effects’. This is because this method looks at the relative differences between the spectra, which is also the case with linear spectral unmixing when it respects the ‘sum-to-unity’ constraint. The advantage of the SAM over the LMM is therefore not clear.
- In part 3.1, L 357, you say that ‘The SAM method is presented here only as an additional verification on the endmembers chosen’. This is in contradiction with the presentation of the SAM in the methods.
Based on these different points, I feel that the SAM does not bring much added value to this manuscript, but instead is an additional method that adds confusion to the results. I therefore suggest removing it entirely, unless it was indeed used to select endmembers, in which case, two sentences about this in the methods should be largely enough (the ‘endmember selection’ part is already very detailed).
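For reference, and to make the sum-to-unity point explicit, the two formulations under discussion are, in their standard forms (my notation, not necessarily the manuscript's): the fully constrained linear mixture model

R_b = \sum_{i=1}^{N} f_i \, e_{i,b} + \varepsilon_b, \qquad \text{with} \quad \sum_{i=1}^{N} f_i = 1, \quad f_i \ge 0,

where R_b is the pixel reflectance in band b, e_{i,b} the reflectance of endmember i in band b and f_i its fractional abundance; and the spectral angle

\theta = \arccos\!\left( \frac{\mathbf{r} \cdot \mathbf{e}}{\lVert \mathbf{r} \rVert \, \lVert \mathbf{e} \rVert} \right)

between the pixel spectrum r and an endmember spectrum e. Since θ is unchanged by a multiplicative scaling of r, and the normalized fractions of a sum-to-unity model behave similarly, the stated illumination/albedo advantage of the SAM over the constrained LMM is not obvious.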
Line-by-line comments
Title
Title: specify ‘linear spectral unmixing’
Title: replace ‘multi-sensor’ with ‘Landsat 8’. Pléiades & RE imagery are only used for calibration & validation at a very specific location. Not used for the unmixing.
Abstract
L 13: The study area corresponds to the Himalaya, remove ‘Hindu Kush’
L 14: ‘covered with’
L 14: specify ‘rock debris’
L 17: Add accent on Pléiades - change throughout text
L 18: specify Landsat 8 - add throughout text whenever required
L 22: The surface composition maps are still 30 m. What do you mean by ‘finer classification’?
L 24: I would not qualify the missing 19.7% as negligible
L 25: This might need to be toned down - see comments on the results regarding the seasonal variability of supraglacial ponds.
Introduction
L 34: remove ‘Hindu Kush’
L 35: suggest adding references here - e.g. Scherler et al., 2018 and Herreid and Pellicciotti, 2020
L 36: rock falls - plural
L 42: add references to energy-balance descriptions of ice cliffs. E.g. Sakai et al., 2002; Reid and Brock, 2014; Steiner et al., 2015; Buri et al., 2016a & b
L 44-45: these references don’t really fit here. You could keep Reid and Brock, 2010 for melt under debris. Add other references to melt under debris and melt of ice cliffs & ponds. The link with Foster et al., 2012 and Miles et al., 2020 is not clear here.
L 51: Shugar et al., 2020a -> 2020. Change also in references. Check throughout text.
L 54: ‘properties of the supraglacial debris’ - not clear. Replace with ‘number/area of ponds’
L 55: i.e. -> and
L 57: There are many more studies looking at the evolution of supraglacial ponds, including some already cited in this manuscript (Miles et al., 2017; Watson et al., 2016; Liu et al., 2015 …). Add more references here or specify the statement.
L 60: debris cover -> debris-covered
L 60: the references for transition of dcgs to rock glaciers is largely incomplete. Other relevant references that could be added: Jones et al., 2019; Knight et al., 2019 and related literature
L 62: references?
L 63: Shugar 2020b -> 2020. Check throughout text.
L 63: Wangchuk and Bolch, 2020: this is more a methods paper than a regional inventory.
L 63: add Chen et al., 2020
L 64: ‘not consistent’ - this is very vague. Specify.
L 64-66: this is repeated from the previous sentence. Remove
L 67: methodologies. Use plural.
L 67: is the methodology really the problem here? My feeling is that the main issue with mapping these relatively small features comes more from the resolution of the sensor used to map them than from the method - see Watson et al., 2018 for a comparison of sensors to map ponds with NDWI. Also, in Kneib et al., 2020, we decided to use an NDWI instead of an LSU approach to map the supraglacial ponds.
L 68: remove ‘only’
L 69: Google Earth is a tool, not a set of data. Usually it’s Landsat images that are shown there.
L 69: Other relevant references for mapping multispectral images: Miles et al., 2017; Steiner et al., 2019; Kneib et al., 2020, Kraaijenbrink et al., 2016, Anderson et al., 2019, among others.
L 70: Other relevant references for mapping with topographic models: Herreid and Pellicciotti, 2018; Westoby et al., 2020
L 73: add Steiner et al., 2019 for ice cliffs in Langtang. For ponds, see Liu et al., 2015.
L 85: Object Based … add capital letters
L 78-89: Incomplete. In this paragraph you need to cite Scherler et al., 2018 and Herreid and Pellicciotti, 2020 - the current 2 global datasets for on-glacier debris-cover extents, and where they fit in terms of methods.
L 90-114: this part lacks organization and the reader is lost. L 108-110 should go in the previous paragraph. L111-114 does not tie well here and should go higher with the description of applications of spectral unmixing.
L 101: define spectral unmixing here in a few words.
L 105: We also used spectral unmixing to map ice cliffs in a recently published study (Kneib et al., 2020)
L 111: The Xie et al. references do not quantify supraglacial features (cliffs or ponds) but are focused on the debris-covered area. Wangchuk and Bolch, 2020 use Sentinel imagery, not Landsat. L 111-114 is unclear and could be removed or at least rearranged.
L 118: teste -> test
L 118: on -> over
L 119: you need to demonstrate the transferability of the approach. See major comment.
L 127-128: Transferability to open-source software is not addressed in the manuscript.
Methods and data sources
L 141: add Brun et al., 2018; Kneib et al., 2020
L 141-142: Are these changes in glacier area? Specify if so
L 146: There is more variability than 7-8 %. See numbers from and refer to Watson et al., 2017, who looks at the full Khumbu region, and Kneib et al., 2020 for Khumbu glacier.
L 149: I do not think that the decimals are needed here - unless they correspond specifically to the bounds of the Landsat images that were used?
L 151: add Maurer et al., 2019
L 153-158: The difference between the different regions is a bit unclear. I would insist on this idea of monsoon gradient (Bookhagen and Burbank, 2010).
L 163, 165 & 171: specify that the Pléiades and RE data were only used for the Khumbu region
L 165-166: ‘so we … Landsat data’. Not necessary, remove.
L 170: need-> needed
L 170: the images are not entirely cloud-free.
L 171-173: It is actually very important that all the images are from the post-monsoon and I would recommend insisting on this, since even for different years you would expect similar surface conditions. This is especially true for ponds (Miles et al., 2017)
L 176: images per acquisition -> images (there is only one Pléiades acquisition). Similarly, remove ‘fall acquisition’ (L 178)
L 179: specify snow-free in the debris-covered part
L 180: reference for ERDAS?
L 182: image parts -> scenes
L 182: using -> with
L 183: 4, 3 (space missing)
L 183-184: Have you considered correcting the Pléiades image to surface reflectance? This would give you an idea of what the spectral values are in an image for which you can determine the composition well. I am also surprised by the use of the RapidEye image alongside the Pléiades images, as if they were equivalent (the RE image is barely mentioned in everything that follows). The spatial resolution is very different (as is the spectral resolution), and the RE image is corrected to surface reflectance while the Pléiades images are not. If you consider the RE image to be sufficient, using Pléiades sounds like overkill, since the RE images are freely available on Planet.com with an academic licence.
L 187: the RapidEye images were resampled? Why and to what resolution?
L 192: elevation data used here is also remote sensing data - the titles of sections 2.2 & 2.3 need to be clearer
L 192: This part is confusing and mostly unnecessary. Just specify what data was used, for what reason (vertical accuracy and less voids) and what you extracted from it.
L 193: against what did you evaluate the 2 datasets? A check could be to test against the Pléiades DEM. Also, for this study the absolute elevation values are not so much of interest, as long as they represent the topography well.
L 193: specify time period covered by these datasets.
L 196: If the goal is to make a thorough comparison of the 2 datasets (which I do not think is needed), you would want to compare the data void areas quantitatively
L 202: remove ‘of interest’
L 205-208: not needed here.
L 219: Why such low elevation?
L 202-203: ‘which provided…’. Remove. Already stated before.
L 229: ‘debris and/or ice cliffs, ponds…’
L 234-234 & 236: Shouldn’t it be ‘linear unmixing models’? Also, in the manuscript you use LMM but also spectral unmixing in an equivalent way. I would recommend sticking with one term and using it consistently.
L 243: ‘for variable illumination conditions’: explain better or remove whole sentence.
L 243-245: why?
L 245: ‘fully-constrained’ - remove space
L 249-253: See major comments.
L 256-270: Was the endmember selection done within the debris-covered area or using the whole scene? I suspect the whole image. How did you distinguish clean ice from snow? These two surface types can be difficult to distinguish from one another.
L 257: ROIs -> actually just single pixels.
L 257: MNF is only used once and therefore does not require an acronym. Same thing for PPI (L 259) and SMACC (L 261)
L 258: What, then, are the results of this MNF? And of this SMACC?
L 265: remove ‘ice’
L 267-268: this does not make sense, why take clean ice and cloud endmembers then?
L 270: I understand that you are using the Pléiades image to check qualitatively the surface type of the Landsat pixels. This could be made a bit clearer in the text (the exact use of the Pléiades image). Furthermore, this raises a few questions (see major comment):
- Did you coregister the Pléiades/RapidEye with the Landsat image?
- For this to be valid, you need to make sure that either the pixel has only one surface type, or to quantify the surface types within the pixel
L 256-277: endmembers may vary from scene to scene, depending on the spectral characteristics (not likely for Landsat), the illumination, the geology (for the debris pixels). Assessment of their transferability is required to validate mapping across all the Himalaya (see major comments).
L 269-270: how many pixels does this make then? 6? You could consider showing them in one of the figures (Fig. 1 for example. Same thing for the validation pixels, but maybe this will be too much)
L 275: The data from Casey et al., 2012 would be more relevant (also for the Khumbu area)
L 275-277: show the spectra from Matta et al., 2017; Naegeli et al., 2015; 2017; Casey et al., 2012 in Fig. 3b.
L 280-281: spectral unmixing -> linear spectral unmixing (or LMM) to be used consistently
L 282: this evolves through the manuscript, sometimes ‘SDC v.1 dataset’, sometimes ‘SDC’, sometimes ‘SDC dataset’... I suggest using just SDC consistently.
L 282: Landsat 8 and Sentinel-2
L 283: There is only one Scherler et al., 2018. Correct throughout text and also in the references.
L 283-284: not particularly relevant, remove.
L 287: What about expanding debris-covered areas (Kamp et al., 2011; Thakuri et al., 2014; Xie et al., 2020; Bhambri et al., 2011...)?
L 290-291: How was this automated check and repair performed?
L 291: What about the other areas?
L 300-301: how was this evaluation performed?
L 302-308: this is mostly repeated from before. Remove/reorganize.
L 310-311: what happens if a pixel satisfies two different thresholds?
L 315: ‘finer classification map’: the maps are still 30 m resolution and only one surface type is selected in the end?
L 321: remove space
L 331-332: it would help to show these ‘validation’ pixels on a map
L 333-335: reference?
L 337: How? Confusion matrix? Final pond area?
L 339: Watson et al. (2018).
Results
L 341: see comments above. I suggest removing this whole part.
L 343: in figure 4, how were the shadows mapped?
L 343: ‘well’: qualitative statement, difficult to interpret
L 352-353 & 356: if this is the case, this should appear in the methods
L 359: this part is very descriptive and qualitative. Shorten and merge with 3.3.
L 362-365: move to discussion.
L 366: RMSE
L 367-368: remove sentence (repeated)
L 370: Is ‘normalized fractional maps’ not the equivalent of ‘partially constrained’? In both cases the coefficients of the unconstrained model were normalized? This would then be in contradiction with what is stated above?
L 376-377: The names of the glaciers do not appear in the figures, which makes it very difficult for anyone who’s not familiar with that area to follow.
L 384: define Kappa coefficient. It is the only occurrence of this coefficient, is it really relevant?
L 387-388: give an estimate of the cloud-covered area
L 388-390: discussion.
L 384-400: this section is repetitive, especially because the accuracy values are the same as the producer’s accuracy. I suggest using the Dice coefficient (Dice, 1945), which takes producer and user accuracy into account in one metric (add it to Table 2). This would make this part easier to read.
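For clarity, the Dice coefficient I am suggesting is the standard one, computed per class from the confusion matrix: D = 2TP / (2TP + FP + FN), which is equivalent to the harmonic mean of the user's accuracy (UA) and the producer's accuracy (PA), i.e. D = 2·UA·PA / (UA + PA).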
L 405: debris-covered
L 415: correctly -> usually
L 421-422: this is mostly discussion.
L 428: rather than climatic patterns, I would think this to be related to the local meteorology.
L 436: Specify Fig. 8b for Labeilong Gl.
L 438-441: discussion.
L 449: it is really difficult to make out the ponds in figure 6. Should it not be figure 9 instead?
L 455-457: it would make sense to use a confusion matrix with a Dice coefficient instead. Part of this paragraph would fit better in the methods.
L 457-459: Not relevant, this is for snow and this paragraph is about water. Plus, this belongs to the discussion.
L 473: It would be interesting to compare these results with the results you would obtain with an NDWI-based approach, following the same calibration-validation scheme as for the spectral unmixing
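For reference, the formulation I have in mind is a green/NIR NDWI of the type commonly used for supraglacial ponds, NDWI = (ρ_green − ρ_NIR) / (ρ_green + ρ_NIR), i.e. (B3 − B5)/(B3 + B5) for Landsat 8 OLI (other band combinations have also been used); the classification threshold would be calibrated on the same Pléiades-derived pond outlines as used for the unmixing.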
Discussion
L 481: debris-covered
L 484-486: you mention this distinction between light and dark debris as coming from the geology. Could this not also be related to the debris water content? Especially if the debris is very thin (as for the thinly debris-covered ice cliffs), I suspect that this could play a role?
L 492: the analysis
L 492-493: remove ‘chosen … image’
L 494: reference? Mention that in the post-monsoon the snow-cover is usually minimal (unless there are early snowfalls, which can happen)
L 494-497: Note that there can be, especially in the post-monsoon, very bright cliffs. This is true after a light snowfall, when the snow sticks longer to the ice than to the rock, but I have also seen a lot of cliffs with clean ice, especially in the post-monsoon. Your Figure 2b is a good example; you can also have a look at Kneib et al., 2020, Figure 1 - in this paper we used 2 thresholds to map ice cliffs: one for the clean ice and one for the dirty ice. Some of this clean ice could also come from ice sails (Evatt et al., 2017) - there are some of these in the upper part of Khumbu, and they are common on glaciers in the western part of the range. The main limitation for these two features is obviously their size, and the mapping will be limited to the largest features.
L 499-500: Mapping ice cliffs with a 30 m resolution sensor is not realistic, and the results would not be representative. I suggest removing this sentence.
L 503-504: more than the method or the spectral resolution, the limitation will be the spatial resolution. References to studies focused on ice cliff delineation would be welcome.
L 505-517: Ponds are difficult because they are very variable from one season to the other and from one year to the next. The area covered by ponds should be minimal in the post-monsoon (e.g. Miles et al., 2017). This point should be highlighted in the discussion, with references to studies looking at pond variability (Liu et al., 2015; Miles et al., 2017; Watson et al., 2016). It would also be interesting to compare the numbers you get for other regions with other studies (Liu et al., 2015; Miles et al., 2017; Watson et al., 2016; Kneib et al., 2020) focusing on other glaciers.
L 510-511: explain this statement.
L 514: in on -> in
L 515: suggest: angles -> gradient. Also, how is this calculated? Specify in methods.
L 516: reference?
L 517: reference?
L 520-521: it is not clear why the bright non-glacierized area would not be mapped as light debris instead of vegetation.
L 530: simplify sentence
L 532: acronym already introduced in introduction. Since only used twice, acronym is probably not necessary
L 535-545: this should go in the methods & results.
L 536: debris-covered
L 536-537: what does the size of the glacier have to do with the turbidity of the ponds?
L 540: How did you derive the slope? Explain. Seeing that the pond coverage is so variable even at the scale of one glacier, and so is probably the slope, my suggestion would be to look at the results in terms of elevation bands (or other glacier partitioning - possibly based on slope?)
L 529: A lot of this paragraph should go in the methods + results.
L 554: Have you looked at the relationship (for ponds and vegetation) with debris stage? (Herreid and Pellicciotti, 2020).
L 563: errors in the SDC
L 565: it would be interesting to look at the changes of ponds and vegetation from east to west in more detail
L 566: ‘cannot be examined here in detail’ -> ‘is beyond the scope of this study’. I disagree, I think the analysis can be taken a bit further with the available data. It would actually add a lot of value to this manuscript.
L 575-581: this is not convincing. One problem is that the fraction of water will also be lower at the pond margins - and since Landsat 8 has a relatively low resolution, this will be the case for most pond pixels. You also do not present any results on this topic. As such, this paragraph can be removed.
L 576: ‘fraction of a pond pixel covered…’
L 584: this is not the focus here since the delineation was applied only to ponds within the debris-covered area. Also, the main difference noted between the different datasets is in the mapping of the supraglacial ponds, while it is noted that there are no major differences for lakes outside the glacier areas. Therefore I would not mention the lakes outside the glacier boundaries but focus on the mapping of the supraglacial ponds, which is still a relevant discussion point. Finally, one problem that arises when applying your approach to off-glacier lakes will be the endmembers you used, since the turbidity of the lakes, but also their depth, will be quite different from those of the supraglacial ponds.
L 591: this ‘outperformance’ is only true for supraglacial ponds (at least in figure 13)
L 599: note that the outlines shown from Chen et al., 2020 have obviously been manually delineated.
L 600-601: you need to mention the pond variability, which could explain some of the differences here.
L 609: in -> to
L 613: this needs to be proven
L 615-616: what corrections do you have in mind? So this is not a final product?
L 630-631: Where are these results? Are they of any use? If yes, they should be discussed in more details
L 631-632: in Kneib et al., 2020 we also used a light and dark debris endmember
L 639: were they removed or were they not?
L 642-646: It seems that the use of this Scherler et al., 2018 SDC triggered a lot of small issues and it occupies a large part of the methods, results and discussion. No inventory will ever be perfect, but do you think that your results could have been improved using the Herreid and Pellicciotti, 2020 dataset, which claims to be ‘better’ than the SDC you used? The main drawback of this dataset is that they used updated glacier outlines…
L 651: The Pléiades and RapidEye were only used for endmember selection and validation of the method.
L 653-655: This has not been proven
L 656-659: my understanding is that Shugar et al., 2020 used a mosaic of Landsat images to map the lakes, which means that only the persistent large lakes would be mapped anyway. So this problem is not related to the NDWI. The NDWI approach may not be perfect, but some studies have demonstrated that it works fairly well (Miles et al., 2017; Watson et al., 2018). I am not convinced by this point and would recommend a comparison of your results with an NDWI-based approach (following the same calibration scheme as for the spectral unmixing).
L 668: remove ‘is of interest’
L 689-690: remove last sentence.
L 702-703: this is a key item and one of the main results of this study. It will be useful to have a link in the article.
L 975: remove, appears twice.
Tables
Table 2: use same number of decimals in the whole table
Table 6: explain in caption what ‘manual’ and ‘automated’ spectral unmixing refer to. All these outlines are from this study - no need to specify.
Figures
Figure 1: caption: give description of images used for Himalaya map.
Figure 2: acknowledge pictures’ photographer(s).
Figure 3: caption: for panel b, add references. Add titles for the 2 panels. Panel a: plot x axis as wavelengths and use the same x axis for panel a & b. Why are the values so low for the debris in panel b compared to panel a?
Figure 5: only one legend needed. Same thing for scale & N arrow. Increase visibility of legend
Figure 6: I cannot see any letters in the maps. Only 1 legend, N arrow & scale (if same scale) needed. Consider adding a panel where you zoom in on a glacier tongue to see the ponds/vegetation/ clean ice better. No clouds in this image then?
Figure 7: the Pléiades image has a very ‘red’ appearance - I suggest adjusting the band composition to make it look more like what the human eye would see, and doing so consistently in the other figures. Usually supraglacial lakes do not have a crescent shape.
Figure 8: I cannot see the black arrows mentioned in the caption.
Figure 9: the comparison of the lakes is not clear. It would help to have one of the datasets fully transparent, with just the outlines. The debris-cover outlines box is hard to see in the legend.
Figure 10: sq.km. -> km2
Figure 11: Hard to see anything. Change colors. Plot ponds and vegetation in different panels. Try a log scale (for panel b at least that could be useful). sq.km. -> km2.
Figure 12: describe what the background images are. Only one scale and one N arrow needed. It is difficult to make out anything. Try zooming in a bit? The red-green combination in panel b is not ideal.
Figure 13: Consider increasing background transparency to make the outlines stand out. It’s difficult to see the purple outlines. Also for the light blue ones. Consider increasing line width.
References
Anderson, L.S., Armstrong, W., Anderson, R., Buri, P., 2019. Debris cover and the thinning of Kennicott Glacier, Alaska, Part B: ice cliff delineation and distributed melt estimates. Cryosph. Discuss. 1–29. https://doi.org/10.5194/tc-2019-177
Benn, D.I., Thompson, S., Gulley, J., Mertes, J., Luckman, A., Nicholson, L., 2017. Structure and evolution of the drainage system of a Himalayan debris-covered glacier, and its relationship with patterns of mass loss. Cryosph. 11. https://doi.org/10.5194/tc-11-2247-2017
Bhambri, R., Bolch, T., Chaujar, R.K., Kulshreshtha, S.C., 2011. Glacier changes in the Garhwal Himalaya, India, from 1968 to 2006 based on remote sensing. J. Glaciol. 57, 543–556. https://doi.org/10.3189/002214311796905604
Bookhagen, B., Burbank, D.W., 2010. Toward a complete Himalayan hydrological budget: Spatiotemporal distribution of snowmelt and rainfall and their impact on river discharge. J. Geophys. Res. Earth Surf. https://doi.org/10.1029/2009JF001426
Brun, F., Wagnon, P., Berthier, E., Shea, J.M., Immerzeel, W.W., Kraaijenbrink, P.D.A., Vincent, C., Reverchon, C., Shrestha, D., Arnaud, Y., 2018. Ice cliff contribution to the tongue-wide ablation of Changri Nup Glacier, Nepal, central Himalaya. Cryosph. 12, 3439–3457. https://doi.org/10.5194/tc-12-3439-2018
Buri, P., Miles, E.S., Steiner, J.F., Immerzeel, W.W., Wagnon, P., Pellicciotti, F., 2016a. A physically based 3-D model of ice cliff evolution over debris-covered glaciers. J. Geophys. Res. Earth Surf. 121, 2471–2493. https://doi.org/10.1002/2016JF004039
Buri, P., Pellicciotti, F., Steiner, J.F., Miles, E.S., Immerzeel, W.W., 2016b. A grid-based model of backwasting of supraglacial ice cliffs on debris-covered glaciers. Ann. Glaciol. 57, 199–211. https://doi.org/10.3189/2016AoG71A059
Casey, K.A., Kääb, A., Benn, D.I., 2012. Geochemical characterization of supraglacial debris via in situ and optical remote sensing methods: a case study in Khumbu Himalaya, Nepal. Cryosph. 6, 85–100. https://doi.org/10.5194/tc-6-85-2012
Chen, F., Zhang, M., Guo, H., Allen, S., Kargel, J., Haritashya, U., Watson, C.S., 2020. Annual 30-meter Dataset for Glacial Lakes in High Mountain Asia from 2008 to 2017. Earth Syst. Sci. Data Discuss. 1–29. https://doi.org/10.5194/essd-2020-57
Dice, L.R., 1945. Measures of the Amount of Ecologic Association Between Species. Ecology 26, 297–302. https://doi.org/10.2307/1932409
Evatt, G.W., Abrahams, I.D., Heil, M., Mayer, C., Kingslake, J., Mitchell, S.L., Fowler, A.C., Clark, C.D., 2015. Glacial melt under a porous debris layer. J. Glaciol. 61, 825–836. https://doi.org/10.3189/2015JoG14J235
Evatt, G.W., Mayer, C., Mallinson, A., Abrahams, I.D., Heil, M., Nicholson, L., 2017. The secret life of ice sails. J. Glaciol. 63, 1049–1062. https://doi.org/10.1017/jog.2017.72
Herreid, S., Pellicciotti, F., 2020. The state of rock debris covering Earth’s glaciers. Nat. Geosci. 1–7. https://doi.org/10.1038/s41561-020-0615-0
Herreid, S., Pellicciotti, F., 2018. Automated detection of ice cliffs within supraglacial debris cover. Cryosph. 12, 1811–1829. https://doi.org/10.5194/tc-12-1811-2018
Jones, D.B., Harrison, S., Anderson, K., 2019. Mountain glacier-to-rock glacier transition. Glob. Planet. Change 181, 102999. https://doi.org/10.1016/j.gloplacha.2019.102999
Kamp, U., Byrne, M., Bolch, T., 2011. Glacier fluctuations between 1975 and 2008 in the Greater Himalaya Range of Zanskar, southern Ladakh. J. Mt. Sci. 8, 374–389. https://doi.org/10.1007/s11629-011-2007-9
King, O., Turner, A.G.D., Quincey, D.J., Carrivick, J.L., 2020. Morphometric evolution of Everest region debris-covered glaciers. Geomorphology 371, 107422. https://doi.org/10.1016/j.geomorph.2020.107422
Kneib, M., Miles, E.S., Jola, S., Buri, P., Herreid, S., Bhattacharya, A., Watson, C.S., Bolch, T., Quincey, D., Pellicciotti, F., 2020. Mapping ice cliffs on debris-covered glaciers using multispectral satellite images. Remote Sens. Environ. 112201. https://doi.org/10.1016/j.rse.2020.112201
Knight, J., Harrison, S., Jones, D.B., 2019. Rock glaciers and the geomorphological evolution of deglacierizing mountains. Geomorphology 324, 14–24. https://doi.org/10.1016/j.geomorph.2018.09.020
Kraaijenbrink, P.D.A., Shea, J.M., Pellicciotti, F., De Jong, S.M., Immerzeel, W.W., 2016. Object-based analysis of unmanned aerial vehicle imagery to map and characterise surface features on a debris-covered glacier. Remote Sens. Environ. 186, 581–595. https://doi.org/10.1016/j.rse.2016.09.013
Liu, Q., Mayer, C., Liu, S., 2015. Distribution and interannual variability of supraglacial lakes on debris-covered glaciers in the Khan Tengri-Tumor Mountains, Central Asia. Environ. Res. Lett.
Matta, E., Giardino, C., Boggero, A., Bresciani, M., 2017. Use of Satellite and In Situ Reflectance Data for Lake Water Color Characterization in the Everest Himalayan Region. Mt. Res. Dev. 37, 16–23. https://doi.org/10.1659/mrd-journal-d-15-00052.1
Maurer, J.M., Schaefer, J.M., Rupper, S., Corley, A., 2019. Acceleration of ice loss across the Himalayas over the past 40 years. Sci. Adv. 5, eaav7266. https://doi.org/10.1126/sciadv.aav7266
Miles, E.S., Willis, I.C., Arnold, N.S., Steiner, J., Pellicciotti, F., 2017. Spatial, seasonal and interannual variability of supraglacial ponds in the Langtang Valley of Nepal, 1999-2013. https://doi.org/10.1017/jog.2016.120
Quincey, D.J., Richardson, S.D., Luckman, A., Lucas, R.M., Reynolds, J.M., Hambrey, M.J., Glasser, N.F., 2007. Early recognition of glacial lake hazards in the Himalaya using remote sensing datasets. Glob. Planet. Change 56, 137–152. https://doi.org/10.1016/j.gloplacha.2006.07.013
Reid, T.D., Brock, B.W., 2014. Assessing ice-cliff backwasting and its contribution to total ablation of debris-covered Miage glacier, Mont Blanc massif, Italy. J. Glaciol. https://doi.org/10.3189/2014JoG13J045
Reid, T.D., Brock, B.W., 2010. An energy-balance model for debris-covered glaciers including heat conduction through the debris layer. J. Glaciol. https://doi.org/10.3189/002214310794457218
Sakai, A., Nakawo, M., Fujita, K., 2002. Distribution Characteristics and Energy Balance of Ice Cliffs on Debris-covered Glaciers, Nepal Himalaya. Arctic, Antarct. Alp. Res. 34, 12–19. https://doi.org/10.1080/15230430.2002.12003463
Salerno, F., Thakuri, S., Guyennon, N., Viviano, G., Tartari, G., 2016. Glacier melting and precipitation trends detected by surface area changes in Himalayan ponds. Cryosph. 10, 1433–1448. https://doi.org/10.5194/tc-10-1433-2016
Scherler, D., Wulf, H., Gorelick, N., 2018. Global Assessment of Supraglacial Debris-Cover Extents. Geophys. Res. Lett. https://doi.org/10.1029/2018GL080158
Shugar, D.H., Burr, A., Haritashya, U.K., Kargel, J.S., Watson, C.S., Kennedy, M.C., Bevington, A.R., Betts, R.A., Harrison, S., Strattman, K., 2020. Rapid worldwide growth of glacial lakes since 1990. Nat. Clim. Chang. 10, 939–945. https://doi.org/10.1038/s41558-020-0855-4
Steiner, Buri, Miles, Ragettli, Pellicciotti, 2019. Supraglacial ice cliffs and ponds on debris-covered glaciers: Spatio-temporal distribution and characteristics. J. Glaciol. 65, 617–632. https://doi.org/10.1017/jog.2019.40
Steiner, J.F., Pellicciotti, F., Buri, P., Miles, E.S., Immerzeel, W.W., Reid, T.D., 2015. Modelling ice-cliff backwasting on a debris-covered glacier in the Nepalese Himalaya. J. Glaciol. 61, 889–907. https://doi.org/10.3189/2015JoG14J194
Thakuri, S., Salerno, F., Smiraglia, C., Bolch, T., D’agata, C., Viviano, G., Tartari, G., 2014. Tracing glacier changes since the 1960s on the south slope of Mt. Everest (central Southern Himalaya) using optical satellite imagery. Cryosph. 8, 1297–1315. https://doi.org/10.5194/tc-8-1297-2014
Wangchuk, S., Bolch, T., 2020. Mapping of glacial lakes using Sentinel-1 and Sentinel-2 data and a random forest classifier: Strengths and challenges. Sci. Remote Sens. 2, 100008. https://doi.org/10.1016/j.srs.2020.100008
Watson, C.S., King, O., Miles, E.S., Quincey, D.J., 2018. Optimising NDWI supraglacial pond classification on Himalayan debris-covered glaciers. Remote Sens. Environ. 217, 414–425. https://doi.org/10.1016/j.rse.2018.08.020
Watson, C.S., Quincey, D.J., Carrivick, J.L., Smith, M.W., 2016. The dynamics of supraglacial water storage in the Everest region, central Himalaya. Glob. Planet. Change 142, 14–27. https://doi.org/10.1016/j.gloplacha.2016.04.008
Westoby, M.J., Rounce, D.R., Shaw, T.E., Fyffe, C.L., Moore, P.L., Stewart, R.L., Brock, B.W., 2020. Geomorphological evolution of a debris-covered glacier surface. Earth Surf. Process. Landforms 45, 3431–3448. https://doi.org/10.1002/esp.4973
Xie, F., Liu, S., Wu, K., Zhu, Y., Gao, Y., Qi, M., Duan, S., Saifullah, M., Tahir, A.A., 2020. Upward Expansion of Supra-Glacial Debris Cover in the Hunza Valley, Karakoram, During 1990 ∼ 2019. Front. Earth Sci. 8, 308. https://doi.org/10.3389/feart.2020.00308
Review by Marin Kneib (marin.kneib@wsl.ch)
Citation: https://doi.org/10.5194/tc-2020-372-RC1
- AC1: 'Initial reply on RC1', Adina Racoviteanu, 22 Feb 2021
Thank you for the thorough review, and for raising pertinent points related to the methodology used, its transferability and the validation of our results. As a first reply during the open discussion phase, we simply address the 5 main points raised, but note that the detailed minor comments will all be addressed in a revised manuscript, as they also offer valuable improvements to the clarity of the writing and the representation of the existing literature.
We would like to clarify from the start that while using the Landsat archive has the benefit of allowing application over the past decades, these data have limitations. Hence, while we present an analysis of the possible controls on ponds and vegetation, we acknowledge that some of these controls may not be possible to extract with the current data. Given the limitations, our intent with this publication was to focus on demonstrating the method, and therefore we were cautious not to overinterpret our data.
With regards to the major comments, we wish to proceed as follows:
- Choice of endmembers and validation:
We appreciate and fully agree with this concern about the quality control of the surface types. We have checked for shifts in the Pléiades and Planet data with respect to Landsat, and have co-registered these images in COSI-Corr. This will be reported in the revised manuscript.
With regards to the choice of surface types, this was the most time-consuming part of the pre-processing, and it underwent multiple iterations. We clarify that we did not extract the spectral signatures from the Pléiades image but directly from Landsat, using the pixel purity index routine in ENVI. Since this particular routine is proprietary to ENVI, we will explain it in more detail in the revised manuscript. We are aware that these ‘pure pixels’ may still have a degree of mixture, and we will acknowledge this as a source of uncertainty in the revised manuscript. We mention here that for the pure pixels, we selected homogeneous surfaces based on the high-resolution data, which were co-registered as suggested by the reviewer.
- Focus regions:
It is true that the distribution of the regions is not even. This is because we sampled known climatic regions, i.e., from west to east: a) the dry-arid monsoon transition zone; b) the central-eastern Himalaya and c) the heavily monsoon-influenced eastern extremity of the Himalaya. We will add one or two regions as suggested here. However, because these additional scenes will all be located in the central-eastern Himalaya, we anticipate that differences in the composition might be due to interregional variability in lithology, for example, and may not further inform the influence of the climatic gradient on the surface composition.
- Generalization of the method to all of the Himalaya:
We will validate the lake results for at least one other site in the Himalaya, depending on the availability of Planet imagery close to the date of the Landsat imagery. We do not have Pléiades imagery for the year 2015 from another site, so there might be a slightly higher uncertainty in the lake outlines we will derive from Planet (at 5 m), which will be discussed.
With regards to the transferability of the method: the idea here was to develop a single spectral signature in an area which includes a variety of lithologies (i.e., Khumbu), to evaluate its performance over the entire range, and to discuss the limitations of such an approach and ways to improve it in a subsequent study.
- Controls on supraglacial ponds
Yes, the ‘slope’ derivation here is the downglacier slope of the debris-covered section. For a revised manuscript, we will explore some further geomorphic analysis of the controls on pond incidence as suggested, such as looking at elevation bins. However, we note two limitations on the scope of the supraglacial pond control analysis we wish to present in this paper:
- as the surface composition, including the lake database, has not undergone manual corrections, we do not wish to over-interpret our results. Existing lake or debris cover databases, including the ones cited in this paper, typically undergo multiple iterations and improvements. A quality-controlled lake composition dataset, which would allow a full analysis of controls, would require a degree of randomised checking and potentially manual adjustments, but this method can certainly form a starting point for such mapping. Here, our intent is to show the potential of the method to decompose the surface composition and notably the lake coverage.
- as noted by the reviewer, lake extent is temporally variable and we offer only a snapshot, whereas a more complete analysis to deepen process understanding would, in our opinion, require investigations that go beyond the scope of this paper, which we wish to keep focused on showcasing the potential of the method.
- Use of the SAM method:
We will remove this in the revised version of the manuscript. The SAM was an intermediary technique (hence its application over the entire image) - but we agree it is not relevant here.
Citation: https://doi.org/10.5194/tc-2020-372-AC1
- RC2: 'Comment on tc-2020-372', Anonymous Referee #2, 24 Feb 2021
The manuscript “Surface composition of debris-covered glaciers across the Himalaya using spectral unmixing and multi-sensor imagery” by Racoviteanu et al. presents a very method-focused study that aims to distinguish different surface components of debris-covered glacier tongues in the Himalaya from readily available satellite imagery using a spectral unmixing approach. Although spectral unmixing is a well-developed technique, it has thus far not been extensively applied to debris-covered glaciers. The method is first implemented and evaluated for a small part of the entire study area, i.e. the upper Khumbu region, using high-resolution satellite imagery as reference. The defined spectral endmembers are subsequently used to apply spectral unmixing to the entire domain, which spans most of the Himalayan arc. Although various surface classes are detected using the approach, from a geographic perspective the main focus of the paper lies on supraglacial ponds and (to a smaller extent) vegetation.
Given the increased attention in recent years to debris-covered glaciers and their cryospheric and hydrological importance, particularly in the High Mountain Asia region, the study presented is certainly of relevance and would be a valuable contribution to The Cryosphere. Although distributed modelling of debris-covered glaciers is still in its infancy, an improved understanding of the surface composition and its spatiotemporal dynamics will be crucial for accurate modelling of these glaciers at larger scale in the coming years.
Despite the clear merits of the work, the manuscript displays several major technical, structural, and interpretational issues in its present form that will require major revision before I would be able to recommend publication in The Cryosphere. I outline the most important ones below, and identify many other (also important) issues in the line-by-line comments.
In terms of structure, the manuscript does not always follow a logical flow. There are parts of the results that are more fitting for the methods, and completely new analyses and several figures are introduced in the discussion. I therefore suggest the authors restructure quite substantially. I also feel there is often a mismatch in the distribution of detail among different components, especially in the methods. Some parts are described in too much detail, while other (often important) parts of the methodology are dismissed with a single sentence. Please refer to my line-by-line comments below, where I identify several of these issues. I would, however, primarily suggest that the authors carefully reread their manuscript with this in mind.
The methods are both developed and validated for a single and relatively small subset of the entire domain over which they are applied. This is of course not ideal, particularly since the full domain is roughly 2000 km wide and considerable differences are to be expected over this large area. These could, for instance, be differences in the lithological and morphological composition of the debris due to differences in geology and climate, atmospheric differences that could affect image corrections, differences in overpass time (i.e. solar zenith angles), etc. I would very strongly recommend seeking further validation of the upscaling performed in the paper using additional high-resolution imagery outside the Khumbu region, preferably far away, e.g. in Spiti Lahaul. Since the acquisition of RapidEye by Planet Labs, academic access to the images is free. Additionally, almost all high-resolution satellite data (i.e. SPOT, WorldView, GeoEye, QuickBird and Pléiades) are accessible to European/Canadian researchers directly from the archive (or even for tasking by submitting a small project proposal).
As mentioned above, the paper is method-focused and as such presents only (very) limited process-related analysis. Particularly for publication in The Cryosphere, I think it is important to include a more advanced analysis and provide a better and more elaborate discussion in this regard. This would improve the paper and more clearly indicate to the readers the potential of the method as a basis or input for subsequent cryospheric/hydrological analyses. Currently the main focus lies with supraglacial ponds, and in principle this is fine, but the current analysis using a simple linear regression of glacier-wide aggregates is very limited and certainly not state-of-the-art. I am also uncertain about the validity of using linear regression in this case, and if the authors were to continue using this method they should assess and clearly indicate the assumptions that are made about the data and their distribution when applying this technique. I suspect there is considerable non-linearity in the relations between pond/vegetation and glacier characteristics, and other machine learning techniques could therefore be better suited here, for example Random Forest regression. Furthermore, past studies have shown different elevation bands to have very different concentrations and distributions of supraglacial ponds (e.g. Kraaijenbrink et al., 2016; Miles et al., 2017; Ragettli et al., 2016), and an analysis at the glacier scale cannot incorporate these important specifics. I would therefore strongly suggest that the authors, instead of looking at entire glaciers, perform a lumped or distributed analysis of some sort. I also think there are several additional variables that are worth exploring: topographic ones, such as aspect, but there are also data on individual glacier change that would be valuable to link to (Brun et al., 2017; Dehecq et al., 2019; Shean et al., 2020). Finally, it would also be interesting and relatively straightforward to employ a more quantitative approach to the climate arguments presented by the paper, for example by including climatologies derived from ERA5 reanalysis data in the supraglacial pond analysis. Implementing these suggestions would allow many of the now qualitative statements to be quantified, which would greatly benefit the message and value of the paper.
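To make the suggested alternative concrete, a minimal sketch of such a non-parametric regression is given below (the file name and column names are entirely hypothetical, and this is not a prescription of a specific setup):

```python
# Rough sketch: Random Forest regression of pond fraction on per-band or
# per-glacier attributes; file name and columns are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("band_aggregates.csv")  # e.g. one row per glacier elevation band
predictors = ["elevation_m", "slope_deg", "aspect_deg",
              "debris_area_km2", "mass_balance_m_we"]
X, y = df[predictors], df["pond_fraction"]

rf = RandomForestRegressor(n_estimators=500, min_samples_leaf=5, random_state=0)
print("cross-validated R2:", cross_val_score(rf, X, y, cv=5, scoring="r2").mean())

rf.fit(X, y)
print(dict(zip(predictors, rf.feature_importances_)))  # variable importances
```

Such a model makes no linearity or distributional assumptions and directly provides variable importances, which would already be more informative than glacier-wide linear fits.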
To summarise, I believe the manuscript presents an interesting, largely unexplored approach that could provide a valuable contribution. However, (i) the structure of the manuscript requires some reworking, (ii) validation outside the Khumbu region is necessary, and (iii) a more rigorous analysis is required with respect to the supraglacial ponds.
Line-by-line comments:
L13. The presented study does not encompass the Hindu-Kush, so I would suggest to remove it.
L13. “cover mantle” -> remove either cover or mantle. I would suggest mantle.
L18. Landsat -> Landsat 8 OLI
L20. “We develop”, this implies that you developed the spectral unmixing technique yourself. Rephrase.
L22, L26. Use “classifications” instead of “maps”
L22. “finer classification maps”, how fine?
L22-26. Also mention more clearly in the abstract that you focus on the debris-covered part (as classified by Dirk Scherler) only.
L24. What does negligible mean here exactly, and if it is negligible, why were all these classes included?
L35. Again, suggest removal of “mantle”
L36. Would be good to include (Evatt et al., 2015) here
L39-41. No reference for this? (e.g. Nicholson and Benn, 2006; Østrem, 1959)?
L45. Pro- glacial -> pro-glacial. Also, why supraglacial without hyphen and pro-glacial with hyphen? Please be consistent.
L47. Pro-and -> pro- and
L58. Intraregional and regional differences and variability in rates of glacier change have become reasonably clear over the last years (e.g. Brun et al., 2017; Dehecq et al., 2019; Shean et al., 2020)
L63-67. Include (Herreid and Pellicciotti, 2020; Scherler et al., 2018) here.
L68. “Object-oriented” -> “object-based”. Object-oriented image analysis (OOIA). Object-based image analysis (OBIA).
L73-74. Second part of the sentence needs to be rephrased.
L89. “Planet” is not a satellite, but a company. Pléiades is written with an accent aigu on the e. There is also SPOT, Worldview, GeoEye.
L92. (Kraaijenbrink et al., 2016) already showed big differences between UAV-derived ponds and RapidEye-derived ponds.
L92. “archive Landsat series” -> “the Landsat archive”
L93. still?
L94. The Landsat archive indeed spans five decades, but the 30 m data (TM, ETM, OLI) only four. Landsat 4 was launched in ’82 if I recall correctly.
L94. I would not necessarily call this a drawback, as it can be advantageous for some applications
L95-96. “which…sensor”. This is not a discriminating factor between full-pixel vs sub-pixel techniques, as they both utilize the same data picked up by the sensor.
L96-100. I cannot follow the logic here. First the authors mention little emphasis on spatial variation of pixel values and pixel neighborhoods, i.e. suprapixel, but provide examples that focus on the pixel internals, i.e. subpixel. Rephrase and/or explain better.
L105. Exploited -> explored
L112. “allow” -> could allow
L124-127. This is a bit out of place here, and should be expanded and moved to discussion.
L126. If the goal is to transfer the method to open source software, why has the procedure been built in ENVI in the first place? Throughout the methods there are a lot of (proprietary) ENVI algorithms and tools involved, which counters this statement.
L134-137. David Shean’s work should be added here (Shean et al., 2020)
L139-140. Rates of change of what exactly? Area, volume, debris-cover? Clearly specify this
L139. Use SI throughout. % per year -> % a-1
L151, L153. Quotes for A, B, C are not necessary
L160-L161. Reads as if Landsat is considerably worse than Sentinel-2, and clearly not the first choice. Remove or rephrase.
L165. Verb should be plural
L163-164. Although I understand this choice, it is rather tricky to assume that the debris surface is similar from year to year around the same time. This should be better acknowledged.
L171. Is “Pléiades 1A” the name of the satellite or the sensor?
L181. Mentioning “Planet satellite” is a bit odd here. Furthermore, RapidEye is a constellation of five satellites.
L182. What is the geodetic accuracy of this L3 ortho tile? These preprocessed products often have orthorectification issues in high relief terrain. How was this solved/accounted for?
L183-185. I would stick to pure data description here and not hint at the methods already using this sentence.
L186. Did you also consider the high-resolution HMA DEM? Why, why not? (Shean, 2017)
L193-194. Here a whole analysis (which is introduced in the discussion) is dismissed with a single sentence. It should be properly outlined here in a separate section. Also see comments for the discussion.
L199. Remove “easily” and replace “high-mountains” with “study area”
L217-218. It is very tricky to assume that these parameters can simply be transferred to the other scenes that are thousands of km away and from different times of day, dates and/or years. It is, as the authors write in L208, a procedure that should be performed on a per-image basis. This should be better acknowledged here, and potential limitations should be clearly indicated. In my opinion, this also strongly endorses the importance of additional validation of the applied spectral unmixing results for areas outside of Khumbu (see main comments).
L221. Remove “basic”
L229-231. Italics are not necessary here
L240. which -> that
L252. I happen to know what a MNF transform does, but the large majority of readers of TC probably do not. It should be better explained and also discussed why this is necessary. It is also not clear to me whether it was used just to determine the dimensionality, or also to reduce noise by discarding MNF bands and/or to decorrelate the OLI bands (Meer and Jong, 2000). Proper references for this procedure are also necessary.
L253. “Pixel purity routine”, “the n-D visualizer” are very much ENVI terms and will not ring a bell with the readers. Since the endmember selection procedure is crucial for the entire analysis it should really be explained in full detail. Why were these tools used, to what effect, and what are the pros and cons. Also, it should somewhere be stated which version of ENVI was used, and whether it was ENVI classic or not.
L259-L263. I do not understand the flow and logic between these sentences. Please restructure.
L264. Not really “areas” if it is only one pixel. Also, picking one pixel does not mean it is not a mixel. Picking one pixel “reduces the chance of a mixed spectral signature in the region of interest of each endmember”.
L265. How do you account for spatial discrepancies between the OLI, Pléiades and RapidEye data? I have not read anything with respect to co-registration of the different scenes. In such a multi-sensor study co-registration is a crucial component of preprocessing, since otherwise it is not guaranteed that the images line up correctly. This would greatly impact the endmember selection and validation procedures and could undermine the entire study. Even after co-registration there will be errors that should be considered and acknowledged.
L265. “false colour composites”. Also, band numbers are often used for all sensors, but they are not defined anywhere. Please add the bands, their no. and their spectral characteristics, e.g. wavelength and bandwidth/FWHM, to the dedicated table (Table 1).
L271-272. From my experience, turbid water can (at least in VIS) still look quite different from pond to pond, depending on the type of suspended sediment. From blueish (glacial silt) to reddish. How is that accounted for?
L287. “area” -> “an area”
L310. What is meant by “finer classification map”? What resolution, how was this done, to what purpose, how does this affect the analysis? This is crucial information that should be explained in detail. Also I think “map” can be removed as just “finer classification” suffices. For me, map has the connotation of being a spatial display of something with the primary purpose being presentation.
L313. “from the Khumbu” -> “that were derived for the Khumbu domain”
L313. I am not sure whether this strictly falls under upscaling, since the spatial support (i.e. Landsat pixel scale) remains the same. What about applying/extending/extrapolating/inferring?
L313. Composition does not seem the right word here. Classification?
L316-317. I find this too much detail. When something can be reproduced similarly in a plethora of ways and different software packages, it is not about the tools for the job, but purely about the method and approach. Also, the Python module of ArcGIS is called ArcPy, not ArcPython. And strictly speaking it would be simply Python scripting using the ArcPy module to invoke ArcGIS functionality.
L323. How is this iterative procedure performed exactly? How do you select new endmembers. Using the n-dimensional visualizer, or the PPI, or something else?
L324. These are not a lot of ground truthing points, to be honest. It is also important to know how these points were determined. It is somewhat vaguely stated that these are “well-distributed on several tongues”, but it is not clear how the points were generated/identified. To obtain a fair classification accuracy measure it is crucial that the validation points are not manually digitized, but randomly selected within the entire domain of the Pléiades/RapidEye images. To get a (more) even number of points among classes that strongly vary in size, a stratified random sample should be taken. This section requires more clarity about the exact procedures used to perform the accuracy analysis.
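For clarity, what I mean by a stratified random sample is something along these lines (a hypothetical sketch; the reference raster would come from the Pléiades/RapidEye delineations):

```python
import numpy as np

rng = np.random.default_rng(42)

def stratified_sample(class_map, n_per_class=50, nodata=0):
    """Draw up to n_per_class random (row, col, class) samples from each class."""
    samples = []
    for c in np.unique(class_map):
        if c == nodata:
            continue
        rows, cols = np.where(class_map == c)
        n = min(n_per_class, rows.size)          # small classes keep all their pixels
        idx = rng.choice(rows.size, size=n, replace=False)
        samples.extend(zip(rows[idx], cols[idx], [c] * n))
    return samples

# Placeholder reference raster standing in for the high-resolution classification
reference = rng.integers(0, 6, size=(300, 300))
points = stratified_sample(reference, n_per_class=50)
print(len(points), "validation points")
```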
L328. OBIA is mentioned before, but never properly referenced using for example (Blaschke et al., 2014). I also find the description of the OBIA procedure to be quite lacking. What settings were used exactly? How did the image segmentation work? Was there any postprocessing done on the objects, e.g. splitting/merging? How were the lakes classified, manually or automatically using a decision tree approach? What was the accuracy of the OBIA classification? Without this information it is impossible for the reader to estimate the validity of the derived data for validation purposes.
L333. Remove “might have occurred”
L336-353. It is not completely clear to me why the SAM procedure was included, since the remainder of the manuscript focuses almost solely on LMM results. This paragraph mentions that the SAM results were used to test endmember choices, but it is not clear how this was done (and this should be included in the methods, not the results section). I would suggest expanding this section and clearly describing to what purpose it was implemented, or removing the SAM entirely from the manuscript if that does not compromise other parts of the study.
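For reference, the SAM itself is simple enough that a short equation or snippet would help readers follow how it was used to test the endmembers, e.g. (sketch only):

```python
import numpy as np

def spectral_angle(pixels, endmember):
    """Angle (radians) between each pixel spectrum and one endmember spectrum.
    pixels: (n_pixels, n_bands); endmember: (n_bands,)."""
    cos = pixels @ endmember / (np.linalg.norm(pixels, axis=1)
                                * np.linalg.norm(endmember))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# A pixel is then assigned to the endmember with the smallest angle, usually with a
# maximum-angle threshold so that dissimilar pixels remain unclassified.
```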
L351. Is that is -> is that it
L362. Abbreviation for root mean square error should be RMSE, not RMS.
L361-363. Why did it have a lower average RMSE? Was this due mainly to a specific class, or to overall performance? How did the class-by-class performance differ? Maybe the on-average worse performer performed better in more ‘important’ classes? Please elaborate.
L363-365. Roughly the same sentence appears twice here.
L372-376. I find most of this to be more fitting for the methods section. Also, how were these seemingly arbitrary threshold values determined?
L372-376. Since multiple classes can be attributed to the same pixel using this multi-step thresholding of the fractional results, the order in which these threshold classifications are combined into a final product matter. That is, what will be the final class of a pixel when it falls within the thresholds for multiple fractional layers? It is not clear to me how this is done exactly.
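As an example of why this matters, one purely illustrative way of combining the fractional layers is to assign classes in a fixed priority order so that later classes never overwrite earlier ones; the authors should state explicitly which rule (and order) they actually used:

```python
import numpy as np

def combine_fractions(fractions, thresholds, priority):
    """fractions: dict of class name -> 2-D fraction array; classes are assigned
    in 'priority' order so that later classes never overwrite earlier ones."""
    labels = np.zeros(next(iter(fractions.values())).shape, dtype=np.uint8)
    for code, name in enumerate(priority, start=1):
        mask = (fractions[name] >= thresholds[name]) & (labels == 0)
        labels[mask] = code
    return labels

# Hypothetical call (class names and order are illustrative only):
# labels = combine_fractions(frac, thr,
#                            priority=["water", "vegetation", "ice",
#                                      "dark_debris", "light_debris"])
```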
L380. A remote sensing classification accuracy of 75% is frankly quite low (see e.g. Foody, 2008; Foody and Atkinson, 2002). For me, it really raises the question of how other classification procedures might fare on the same data. Would a simple minimum distance supervised classification perform better or worse? Is it really beneficial to use this technique? Since the accuracies are low, particularly for the debris classes, I expect a very thorough and complete discussion about the limitations and capabilities of the method in comparison with possible other classification approaches.
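Such a baseline comparison would be straightforward to implement; a minimum-distance classifier, using the endmember (or class mean) spectra as class centres, is only a few lines (sketch only):

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """pixels: (n_pixels, n_bands); class_means: (n_classes, n_bands).
    Returns the index of the nearest class mean for each pixel."""
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return dists.argmin(axis=1)
```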
L385-386. I do understand the argument that there is a link between the occurrence of a class and the classification accuracy.
L387. If it is heavily dusted, then it is not clear ice, right? That is, exposed ice != clean ice
L400-404. It might be good to add uncertainty ranges to these percentages, given the moderate classification accuracies.
L415-416. “5.6% of the debris area”. This is quite arbitrary because this number completely depends on the quality of the SDC dataset.
L420-421. If clouds are not present in the validation region, how were you able to assess the accuracy and confidently extend that to the other Landsat scenes?
L424-426. There are several of these climatic ‘speculations’ in the manuscript, which could be substantiated by including some climate data (see also main comments)
L425. Of course it has to do with climate to some degree, but satellite images are snapshots and there is just a degree of luck involved regarding cloud cover. I would suggest not to over-analyze this.
L444. “latter two” is a bit odd here, since after the two that is being referred to there are other things still mentioned. Suggest rephrase.
L449. “OBIA image segmentation”. I would use either OBIA or image segmentation. This depends on whether you manually assigned the object to the water class or performed an automated procedure (an OBIA), which is not clear from the methods.
L454-455. Not sure whether it is fair to compare a water classification to a snow classification.
L459. Overestimated is one word.
L460. I am a bit puzzled by the binary pond area. Wouldn’t one of the benefits of having fractional subpixel information be that one could do the analysis using those fractional values? First converting them to binary information seems to undo that. Is this then really better/different than a supervised classification or NDWI thresholding approach?
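To illustrate the point, the two pond-area estimates already diverge in a toy example (the array and the 0.5 threshold are placeholders, not the authors’ values):

```python
import numpy as np

# Placeholder array standing in for the unmixed water-fraction layer of one tongue
water_fraction = np.random.default_rng(0).random((100, 100)) * 0.3

pixel_area = 30 * 30   # m2 per Landsat 8 OLI pixel

area_fractional = np.nansum(water_fraction) * pixel_area            # uses the fractions directly
area_binary = np.count_nonzero(water_fraction >= 0.5) * pixel_area  # after binarising at 0.5
print(area_fractional, area_binary)
```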
L461. OBIA analysis = object-based image analysis analysis
L467. “Good agreement” is subjective, needs to be quantified.
L478. Maybe add a line or two that helps to substantiate this presumption?
L488. I have seen snow patches on the debris in spring in the field and on satellite imagery, but not in the early post-monsoon period. I am not saying it is impossible, I only find it quite unlikely. Isn’t it clear from the rest of the Landsat scenes whether there are snow patches or not?
L500-512. As mentioned before, it would be a great addition to the manuscript to include climate data to really quantify this climate dependency instead of providing only speculation.
L511-512. As mentioned before, it would be a great addition to the manuscript to include data of glacier mass balance (Brun et al., 2017; Shean, 2017) and velocity (Dehecq et al., 2019) to substantiate these hypotheses.
L507. “Less glacier shrinkage” over what time period?
L511. Reference?
L522. Reference?
L524-566. I find it very odd to only introduce this analysis here, in the discussion. Although it is not part of the remote sensing and unmixing methodology, the methods used here should be added to a dedicated methods section and the results to a dedicated results section. I am not completely opposed to introducing figures in the discussion section, but introducing three new figures with results there is a bit odd. I would suggest to carefully reconsider the discussion and put any methods/results related parts in the correct sections.
L531. What constitutes a debris-covered glacier tongue in this case? How does removing small tongues help to remove bare land patches? This part requires clarification. Also, 1 km2 is not big, but certainly not very small: 79566 of 95537 glaciers in Asia are smaller than 1 km2.
L532. So larger glacier tongues have more turbid supraglacial ponds?
L538. “For ex.”? Why not just the broadly accepted “e.g.”
L542. I am not very surprised that average glacier values do not show strong correlations since the supraglacial pond density is highly variable over a single glacier. It would probably be better to look at elevation bands, as other studies have also done (e.g. Ragettli et al., 2016).
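Aggregating by elevation band is straightforward once the per-pixel fractions and elevations are tabulated; a hypothetical sketch with synthetic placeholder data and illustrative column names:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Placeholder table standing in for one row per debris-covered pixel
df = pd.DataFrame({
    "elevation": rng.uniform(4000, 5500, 10_000),
    "pond_fraction": rng.random(10_000) * 0.1,
})

bands = np.arange(4000, 5600, 100)                 # 100 m elevation bands
df["band"] = pd.cut(df["elevation"], bands)
band_means = df.groupby("band", observed=True)["pond_fraction"].mean()
print(band_means)
```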
L551. Quantify “in general”
L551. Seems more than 20% on the figure.
L556. Again, what is meant by “in general”
L559-561. I do not find this surprising: (i) looking at the scatter plots I highly doubt whether the assumptions that are made for linear regression are valid here, (ii) the signal is strongly subdued by looking at glacier-average values. Other machine learning approaches that can robustly deal with non-linearity might work better here, e.g. Random Forest.
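As a sketch of what I mean, with synthetic placeholder data and illustrative predictor names (not the authors’ variables):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder table standing in for per-glacier (or per-elevation-band) attributes
df = pd.DataFrame({
    "slope": rng.uniform(2, 25, 300),
    "elevation": rng.uniform(4000, 5500, 300),
    "velocity": rng.uniform(0, 30, 300),
    "pond_fraction": rng.random(300) * 0.1,
})

X = df[["slope", "elevation", "velocity"]]
y = df["pond_fraction"]

rf = RandomForestRegressor(n_estimators=500, random_state=0)
print("Cross-validated R2:", cross_val_score(rf, X, y, cv=5, scoring="r2").mean())
```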
L562-566. I am not sure whether Figure 12 and this small description add much to the analysis in its current state.
L524-566. Overall, I find this analysis quite lacking in rigour and novelty. With a few adaptations I think a much more interesting and valuable analysis can be performed (see main comments).
L574-575. It should be acknowledged here that the lake turbidity is temporally highly variable and, also given the uncertainties of the classification method, the satellite snapshots might therefore be difficult to use for this purpose. Spatial accuracy of the Landsat OLI data will also be a concern, as from acquisition to acquisition the pixels will be slightly misaligned, resulting in potentially very different ‘mixel’ compositions and unmixing results. This effect will be particularly strong for the relatively small ponds that are almost always adjacent to the spectrally very different debris pixels. This argument of course not only applies in this case, but also for the applicability of the entire approach with respect to multitemporal analyses. These limitations should be clearly stated and discussed in the discussion section.
L586. “outperforming”. I do not think that purely based on visual inspection of a 5 x 5 km subset of one of the major glacier tongues in the validation region of this study, which is a minute subset of the entire dataset, one can draw the conclusion that this method outperforms the other approaches. To make such claims there has to be some level of quantification and an assessment over a much larger area.
L599. In my opinion, automated scalability to large regions is also an important limitation to consider.
L603-604. This gives the impression that it would be simple to transfer the unmixing parameters to the entire Landsat archive. This is not true because of differences that exist between sensors and bands, even though sometimes these are small: MSS != TM != ETM+ != OLI. For each sensor separate endmember selection will have to be performed and for older images this will not be trivial, given the lack of high-res calibration/validation data. I am not saying it is impossible, but these lines should be honest about the ease of transferability and the application of the method to historical imagery.
L608. As mentioned before, I would like to see this confidence validated for a region outside the Khumbu with additional high-res imagery.
L611. What is meant by “some post-classification corrections”? How will these be determined without validation?
L626. I have not read this before and couldn’t find it. I was under the impression that only turbid water was considered as endmember. Again, be strict about separating methods, results and discussion and do not introduce new methods in the discussion.
L632-635. I cannot follow the logic here. Please rephrase.
L636. Reference for the bad performance?
L650-652. Successfully applied, but not validated for accuracy.
L654. Important to mention, though, is that commercial high-res imagery was required for proper endmember selection. Also, I think detail alone is not the sole criterion on which performance should be assessed. Usability, scalability, ease of use, speed of implementation are all factors to consider.
L657-659. I don’t think this was confidently demonstrated in this study. Also what is meant by historical and more recent here? All images that were used are from ~2015.
L658 “imagers” -> “images”
L660-662. Yes, this seems to be true. But would just calculating a long-term NDVI composite and thresholding based on that not result in a much simpler approach that is as effective?
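The simpler alternative I have in mind is essentially the following (the NDVI stack is a synthetic placeholder and the 0.2 threshold is illustrative and site-dependent):

```python
import numpy as np

# Placeholder (n_scenes, rows, cols) stack of per-scene NDVI,
# e.g. (B5 - B4) / (B5 + B4) from Landsat 8 surface reflectance
ndvi_stack = np.random.default_rng(3).uniform(-0.2, 0.8, (5, 200, 200))

ndvi_composite = np.nanmedian(ndvi_stack, axis=0)
vegetation = ndvi_composite > 0.2     # illustrative threshold
print("Vegetated fraction:", vegetation.mean())
```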
L665-666. Rephrase sentence, grammar incorrect.
L675. “other python-based routines”. Remove “other”, the ENVI approach was not a Python one. Capitalize “Python”, it is a name. Why just Python-based routines, when it can probably be achieved using various programming languages? I would suggest changing this to “routines using open source software”.
L685. Complement -> to complement
L687-688. I find this an odd last sentence. Would fit better somewhere in the introduction.
L697. “in ArcPython” -> “using the Python module ArcPy from ESRI ArcGIS”
Fig. 3. It would be good to use the actual wavelengths on the x-axis of panel A.
Fig. 5. In both my printout and the zoomed-in PDF the resolution of the grids is too low, and details are not visible. The font size of the legend is also very small. It would be better to convert the figure into a full-page 3x2 format.
Fig. 6. Similar comment as for Fig. 5. The small details are not discernible due to resolution/size issues.
Fig. 7a. What are the white blobs on the eastern moraine?
Fig. 9. The legend mentions transparent Pléiades outlines, but these details are not visible without zooming in by a few hundred percent. Illegible on my (not bad) printout.
Fig. 9, L1169-1170. This is something for the results section, not for a figure legend.
Fig. 10. This figure does not add much to the analysis, in my opinion, and could easily be combined with figure 11.
Blaschke, T., Hay, G.J., Kelly, M., Lang, S., Hofmann, P., Addink, E., Queiroz Feitosa, R., van der Meer, F., van der Werff, H., van Coillie, F., Tiede, D., 2014. Geographic Object-Based Image Analysis - Towards a new paradigm. ISPRS Journal of Photogrammetry and Remote Sensing 87, 180–191. https://doi.org/10.1016/j.isprsjprs.2013.09.014
Brun, F., Berthier, E., Wagnon, P., Kääb, A., Treichler, D., 2017. A spatially resolved estimate of High Mountain Asia glacier mass balances, 2000-2016. Nature Geoscience 10, 668–673. https://doi.org/10.1038/ngeo2999
Dehecq, A., Gourmelen, N., Gardner, A.S., Brun, F., Goldberg, D., Nienow, P.W., Berthier, E., Vincent, C., Wagnon, P., Trouvé, E., 2019. Twenty-first century glacier slowdown driven by mass loss in High Mountain Asia. Nature Geoscience 12, 22–27. https://doi.org/10.1038/s41561-018-0271-9
Evatt, G.W., Abrahams, I.D., Heil, M., Mayer, C., Kingslake, J., Mitchell, S.L., Fowler, A.C., Clark, C.D., 2015. Glacial melt under a porous debris layer. Journal of Glaciology 61, 825–836. https://doi.org/10.3189/2015JoG14J235
Foody, G.M., 2008. Harshness in image classification accuracy assessment. International Journal of Remote Sensing 29, 3137–3158. https://doi.org/10.1080/01431160701442120
Foody, G.M., Atkinson, P.M., 2002. Uncertainty in Remote Sensing and GIS. Chichester.
Herreid, S., Pellicciotti, F., 2020. The state of rock debris covering Earth’s glaciers. Nature Geoscience 13, 621–627. https://doi.org/10.1038/s41561-020-0615-0
Kraaijenbrink, P.D.A., Shea, J.M., Pellicciotti, F., de Jong, S.M., Immerzeel, W.W., 2016. Object-based analysis of unmanned aerial vehicle imagery to map and characterise surface features on a debris-covered glacier. Remote Sensing of Environment 186, 581–595. https://doi.org/10.1016/j.rse.2016.09.013
Meer, F.V.D., Jong, S.M.D., 2000. Improving the results of spectral unmixing of Landsat Thematic Mapper imagery by enhancing the orthogonality of end-members. International Journal of Remote Sensing 21, 2781–2797. https://doi.org/10.1080/01431160050121249
Miles, E.S., Steiner, J., Willis, I.C., Buri, P., Immerzeel, W.W., Chesnokova, A., Pellicciotti, F., 2017. Pond dynamics and supraglacial-englacial connectivity on debris-covered Lirung Glacier. Frontiers in Earth Science 5, 1–19. https://doi.org/10.3389/FEART.2017.00069
Nicholson, L., Benn, D.I., 2006. Calculating ice melt beneath a debris layer using meteorological data. Journal of Glaciology 52, 463–470. https://doi.org/10.3189/172756506781828584
Østrem, G., 1959. Ice melting under a thin layer of moraine, and the existence of ice cores in moraine ridges. Geografiska Annaler 41, 228–230.
Ragettli, S., Bolch, T., Pellicciotti, F., 2016. Heterogeneous glacier thinning patterns over the last 40 years in Langtang Himal, Nepal. The Cryosphere 10, 2075–2097. https://doi.org/10.5194/tc-10-2075-2016
Scherler, D., Wulf, H., Gorelick, N., 2018. Global Assessment of Supraglacial Debris Cover Extents. Geophysical Research Letters 4–11. https://doi.org/10.1029/2018GL080158
Shean, D., 2017. High Mountain Asia 8-meter DEM Mosaics Derived from Optical Imagery, Version 1. https://doi.org/10.5067/KXOVQ9L172S2
Shean, D.E., Bhushan, S., Montesano, P., Rounce, D.R., Arendt, A., Osmanoglu, B., 2020. A Systematic, Regional Assessment of High Mountain Asia Glacier Mass Balance. Frontiers in Earth Science 7. https://doi.org/10.3389/feart.2019.00363
Citation: https://doi.org/10.5194/tc-2020-372-RC2
-
AC2: 'Reply on RC2', Adina Racoviteanu, 05 Mar 2021
Initial Response to reviewer 2
We thank the reviewer for the thorough review of our paper and the detailed and valuable line-by-line comments, which will all be addressed in the revised version of the manuscript. With respect to the three important issues identified by the reviewer, we will address these as follows:
1. Structure of the manuscript
Thank you for these suggestions. Clearly it is important that we have the methodological description clear and consistent, as we wish to demonstrate the utility of the method with this paper. We will therefore rework the paper to present a more balanced level of detail across the various aspects of the methods applied and the associated strengths/limitations. We will also move the analyses presented in the discussion to the results, as requested, and the other structural improvements suggested in the line-by-line comments will also be applied.
2. Validation outside the Khumbu region
We are happy to add another validation site or two outside the Khumbu region, as also pointed out by Reviewer 1. We have searched for Planet imagery outside the Khumbu for the same year and as close as possible to the dates of the Landsat scenes, and we have also searched the ESA archive; unfortunately there are no Pléiades, SPOT or WorldView scenes for that year for any of the sites, and Planet imagery is limited in the western part (Lahaul-Spiti), with no acquisitions around the date of the 2015 Landsat image. We will therefore proceed with validation at 1–2 additional sites where suitable scenes are available.
With respect to the comment about differences in lithological and morphological composition of the debris, we are fully aware of these differences. We are also aware that, because we could not capture the full variability in lithologies and geologies in the study region using Landsat data, there may be uncertainties. With respect to atmospheric differences that could affect image corrections, as described in the manuscript, we tested the Dark Object Subtraction approach used in the ARCSI routine against ground data from two sources (CAMS and AERONET), and the atmospheric profiles were then derived for each scene. We consider these to be among the less important sources of error, but we will check again that they are all addressed in our discussion of uncertainty. These uncertainties will be discussed more fully in the revised manuscript.
3. A more rigorous analysis is required with respect to the supraglacial ponds.
We thank the reviewer for these interesting ideas for further analysis. As we stated in the paper, given the limitations of the Landsat data, our intent with this publication was to focus on demonstrating the method, and we were therefore cautious not to overinterpret our data, as we believe that an in-depth process analysis would ideally be performed on a fully quality-controlled dataset, and potentially a multi-temporal one, given the known seasonal and interannual variability in lakes. That being said, additional analyses are of course possible using some of the readily available datasets mentioned by the Reviewer. We will therefore investigate including (a) glacier surface changes from Brun et al. (2019); (b) glacier velocity from Dehecq et al. (2020); and (c) perhaps climate data as suggested (ERA5 or HAR).
With respect to the use of machine learning algorithms (i.e., RF), we consider this to be potentially promising, but beyond the scope of the current paper, which is already quite substantial in length. We prefer to recommend a full analysis with machine learning involving a quality-controlled dataset to remove any outliers or errors, which may be done in future versions of this dataset.
We welcome the reviewer’s suggestion to aggregate the data rather than presenting it glacier by glacier. We are yet to be convinced that aggregating the data by elevation bands is meaningful, because of the large spread in altitudinal range of the debris-covered tongues in this area, and because of uncertainty about how absolute elevation relates to the down-glacier position of supraglacial lakes, which might be a more meaningful process-oriented metric. However, we will explore this option by performing the analysis to ascertain whether it yields more meaningful relationships, as well as testing the 1° gridding approach to aggregating data as used in Dehecq et al. (2020). This latter option would provide consistency with the existing datasets mentioned. We hope to include a new figure in the revised paper, which will contain some aspects of these data and provide a broader context for the changes detected in our study.
Citation: https://doi.org/10.5194/tc-2020-372-AC2