Dear authors,
I’d like to thank you for the extensive and carefully considered responses to the reviewers. It’s a treat to read well-thought-out comments, and I’m happy to report that all my comments have been addressed, either completely or to some degree. I’m therefore happy to say that I support the publication of this manuscript after a few minor revisions.
In my previous review, my main issues revolved around the treatment of spatial scaling between ICESat-2 and CryoSat-2, and the alignment of the ellipsoidal heights of the identified CRYO2ICE observations correcting only for tidal adjustments. While the authors have done a great job investigating the impact and differences here, I’m not entirely convinced by some of the statements regarding the applicability of this treatment of spatial scaling. Hence, I’d like to ask the authors to consider the way the work is presented and open this up for discussion. I do believe that the methodology presented here is sound, but I worry that some of the statements are too rigorous and might be used “wrongly” in future studies when referenced. Thus, I urge the authors to consider, within the frame of these concerns, some of their statements and their choices for methodology highlighted below from the revised version of the manuscript or from the reviewer responses. Here, I believe discussing these points in the new section “4.3 Adjusting for the Difference in CS2 and IS2 Footprint” and in section “4.6 Surface Roughness and Cryo2Ice retrievals” would be enough.
Major comments
Backscatter distributions
Line 230-232. I appreciate the inclusion of backscatter across all strong beams and the point-to-point comparison that you included in Appendix (Figure G1) – I actually think this is of more value to include in the manuscript as a main figure, than only Figure 4 alone – perhaps you could include both.
However, I also want to point out that Site 4 shows large variations in retrieved backscatter (exceeding −5 dB and +3 dB), suggesting some real differences here, which supports your conclusion that substantial surface changes have occurred at this site (e.g., ridging, surface features not well captured in the coarser satellite sampling); perhaps include this aspect in your roughness analysis. I would even consider making a histogram of the dB differences at each site (and including this along with Figure G1).
Spatial scales
I appreciate the authors’ work on evaluating the spatial autocorrelation with semi-variograms and the inclusion of a product averaged to 1 km. However, what I really want to open a discussion on is:
I. Do we believe that values of CryoSat-2 are “good” enough, in their own right, to track the surface so that one can derive meaningful statistics from observations at this native resolution? Or to what spatial extent do we believe the noise has an impact?
II. Does the semi-variogram you show truly present an inflection point/convergence level at 1 km and some convergence after 300 m, when in fact you do not know what has occurred at scales shorter than 300 m due to the sampling of CryoSat-2? That is to say, you cannot really state that 300 m is where it converges, when it is simply the first lag distance that you are presented with. Especially considering the large variability in your semi-variogram, does it truly reach an inflection point?
I’m mostly worried about discussion point II, as it appears that you state, over landfast ice, that the semi-variance begins to become constant after 300 m (but this is again your first lag distance…) to indicate that the spatial autocorrelation becomes negligible (and would be after 1 km). So, while I do not believe that you need to change your methodology, since the inclusion of the 1-km smoothing provides some insight into the impact of choosing CS2’s native resolution or not, I do believe you need to expand on this selection of footprints/smoothing scales and how applicable it is, in Section 4.3.
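To make the lag-spacing concern concrete: with along-track points every 300 m, the first available lag of an empirical semi-variogram is already 300 m, so nothing can be inferred about autocorrelation below that scale. A minimal sketch of the classical (Matheron) estimator on synthetic 1-D heights (hypothetical data and spacing, not the authors’ product) illustrates this:

```python
import numpy as np

def empirical_semivariogram(x, z, lags, tol):
    """Matheron estimator: gamma(h) = mean of 0.5*(z_i - z_j)^2 over all
    point pairs whose separation |x_i - x_j| lies within tol of lag h."""
    x = np.asarray(x, float)
    z = np.asarray(z, float)
    dx = np.abs(x[:, None] - x[None, :])        # pairwise separations
    dz2 = 0.5 * (z[:, None] - z[None, :]) ** 2  # pairwise semi-variances
    gamma = []
    for h in lags:
        mask = (dx > h - tol) & (dx <= h + tol) & (dx > 0)
        gamma.append(dz2[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Synthetic ~75 km track sampled every 300 m; pure white noise stands in
# for the heights, so the "sill" is flat from the very first lag onward --
# indistinguishable, at this sampling, from convergence "after 300 m".
rng = np.random.default_rng(0)
x = np.arange(0.0, 75_000.0, 300.0)
z = rng.normal(0.0, 0.05, x.size)
lags = np.arange(300, 3001, 300)                # first lag = point spacing
gamma = empirical_semivariogram(x, z, lags, tol=150.0)
```

The point of the sketch: for uncorrelated noise the estimated semi-variance sits near the noise variance at every available lag, so a flat curve starting at the first lag cannot distinguish “converged by 300 m” from “never correlated at resolvable scales”.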
Minor comments
While the discussion and the comparison with results of previous studies provided in the conclusion is interesting (!), it should not be mentioned for the first time in the conclusion, but rather during your discussion. I encourage you to move it further up.
You included a new citation of a recent study (Fredensborg Hansen et al. 2024) and I believe you are citing it several times throughout the paper, but in different, mis-spelled, ways it appears. In line 50, you’ve written Freedensborg Hansne et al. (2024); in line 152 you’ve written Fosberg et al. (2024); line 464 it says Freesborgen Hansen et al (2024), and in line 296 and 474-475 you’ve correctly (according to your list of references) written Fredensborg Hansen et al. (2024). Please check and correct accordingly.
Data (section 2). You have not included any sub-section on the Sentinel-1 data that you have used here for the backscatter comparison and as the background of several figures. Please include this along with any pre-processing applied, and please include information about which polarizations you use (also check for consistency that the same is used; e.g., Figure G1 and Figure 8 seem to use either different polarisations or different colour scales? Also, on these figures, please include the colourbar to show the range of dBs), and describe what you would expect over the landfast ice in terms of backscatter changes.
Also, some of the figures are still not of great quality and the text at times is too small to read (e.g., lat/lon on maps). I urge you to have another look at that!
Line 14-17. Does this in fact mean that it is not only the first assessment of snow depth from CRYO2ICE over lead-less ice, but the first assessment of any dual-frequency snow depth over lead-less ice?
Line 20. “after applying an ocean tide correction (…)” - “after applying an ocean tide correction based on comparison with tide gauges (…)”
Line 23. “significant” to “significantly”
Line 26. “attributing” to “attributes”
Line 26-27. When you say surface roughness, do you mean snow surface roughness or ice surface roughness – or both?
Line 37. “coincident” to “monthly composites of “: I’d be careful with the use of “coincident” here, as this is not really the case for the monthly estimates (there is some degree of spatial and temporal overlap, but in general they do not see the exact same ice).
Line 49. “A few hundred kilometers”? Fredensborg Hansen et al. (2024) show overlap of more than 500 km, up to almost 1000 km for some tracks when consistent ICESat-2 and CryoSat-2 data were present. Perhaps change to “hundreds of kilometers”. As written, it sounds like there is little data available along the transects, which is not truly representative.
Line 80. Remove “~” before “150 photons”. I’m not aware of a change to the ATL07 methodology to not be exactly 150 photons, but maybe it has?
Line 82. “For this study, the ATL03”… I’m questioning this segment-length value based on a response to reviewers, where it was stated that you got 36 ATL07 segments within each 300-m segment, resulting in a ~8.3 m segment length. Is this the case for all your segments, or is this in fact an average number of segments along your ~75-km track? The former would suggest that you get exactly the same photon counts across the entire track, which seems unlikely. Could you clarify how this was computed?
Line 103. “over sea ice in the SARIn mode” to “over sea ice floes in the SAR/SARIn modes”. The re-tracking over leads is different! (although not relevant for your study, but important nonetheless).
Line 144. Could you include a reference to data/where you observed this showing the high pressure during this period?
Line 150. Sentence seems to stop abruptly. Remove “to” before references, perhaps?
Line 161. Re-check formula for refractive index! Should be ^1.5, right?
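For reference, and assuming the manuscript follows the commonly used density-based relation for the microwave propagation speed in dry snow (e.g., Ulaby et al.), the 1.5 exponent enters as

```latex
c_{s} = c\,\bigl(1 + 0.51\,\rho_{s}\bigr)^{-1.5},
\qquad
n_{s} = \frac{c}{c_{s}} = \bigl(1 + 0.51\,\rho_{s}\bigr)^{1.5},
```

with the snow density $\rho_{s}$ in g cm$^{-3}$. If a different formulation is used in the manuscript, please state it explicitly so the exponent can be checked.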
Line 179. How did you get this 6 cm difference? Tide gauge or from the tide models? It’s not entirely clear from the updated text.
Line 186. You state here that it is a semi-variogram of the in-situ snow depths, but the semi-variogram that you showed in the response to reviewers was for the CRYO2ICE snow depths? If that is true, it is striking that they both reach an inflection point at ~1 km as you conclude, although I must say I don’t fully believe the CRYO2ICE semi-variogram shows a deflection/convergence at 1 km, due to the sampling of CryoSat-2 and the variability (as discussed in the Major comments). If you have semi-variograms of both, I strongly encourage you to include them!
Line 231-232: I would probably change this sentence to reflect that in the majority of cases the assumption is likely valid. However, your point-to-point comparison shows some variability (which might be significant!).
Section 4.1 seems a bit short or insufficient, perhaps? Is there not more you can state when comparing with former studies? Perhaps on snow depth variability along the transect (e.g., your standard deviation compared with the ranges observed)? Expectations regarding the periods in question (beginning or end of late winter etc.)?
Figure 7. You did not update this to include the 1-km averages too? I strongly encourage you to do so.
Line 312-315. You mention here the semi-variogram, but do not show it either as a main figure or in Appendix? I strongly recommend you do include it.
Line 374. What 100-km-averaged product?
Lines 376-387. Interesting discussion! It appears to me that there are then some compensating biases between CryoSat-2 and ICESat-2 when smoothing to the 1 km length scale, since very smooth or very rough ice appears to have the largest differences, while transition zones (Sites 1 and 2) match well. It would be interesting to know what in the processing could be driving this.
Figures 7 and 12. I do wonder whether, instead of just a line, you could shade the “area” of equivalent coverage (the roughness segments/equivalent along-track coverage you show in the maps) to highlight the observations compared in the site-specific plots.
Section 4.6. Since this is surface roughness from ICESat-2, it is essentially snow roughness – and here, it is based on the Gaussian Width of the photon distributions, if I am not mistaken. I think it would be worth including a short paragraph on how well you believe this roughness parameter actually represents the roughness of the surface that you’ve encountered.
Line 474. Could it also be because the negative snow depths in Fredensborg Hansen et al. (2024) are not at individual CS2 footprints, but rather at the 7-km averaged windows that you mention?
Line 484. “(…) overestimation” to “(…) overestimation within vicinity of significantly ridged ice”, or something similar to highlight that this was observed where you had large heterogeneity in the ice surface.
Line 490. Remove “centimeter level” or “few centimeters”.
Line 495. Remove “finer” after IS2. Not sure what is meant here. Perhaps “high-resolution” instead, or something similar, if that is what is hinted at?
Review of “Snow Depth Estimation on Lead-less Landfast ice using Cryo2Ice satellite observations” by Saha et al.
Summary:
The study assesses the potential for near-coincident ICESat-2 and CryoSat-2 (Cryo2Ice) satellite data in estimating snow depth over landfast ice in the Canadian Arctic Archipelago. Snow depth is retrieved by calculating the absolute difference in surface height from the two satellites, considering an ocean tide correction. The study compares the retrieved snow depths from Cryo2Ice with in-situ measurements, showing good agreement in terms of mean values. However, Cryo2Ice snow depths were, on average, underestimated by 20.7%. Discrepancies are attributed to differences in sampling resolutions, snow characteristics, surface roughness, and tidal correction errors. The results suggest the potential for estimating snow depth over lead-less landfast sea ice, but further investigation is needed to understand biases related to sampling resolution, snow salinity, density, surface roughness, and altimeter correction errors.
General Comments:
This is an interesting study, which will be valuable for improving our understanding of retrieving snow depth with a dual-altimeter approach. I had no problems following the paper, but I believe clarity can be improved. There are some parts of the analysis and discussion that I think need clarification and revision. I think this work deserves publication, but major revisions are needed.
My main concerns are:
Specific Comments:
L101: Figure 3 is introduced before Figure 2.
L113: For the snow depth measurements, what was your sampling strategy? Can you be more specific here? Did you walk straight transects? Did you ensure representative sampling, considering the fraction of deformed sea ice?
L157: The MSS is not mentioned under 2.6. I suggest briefly explaining the reason here.
L164: To my knowledge, the ATL07 product does not contain the individual photon heights, but segments of different length that aggregate the photon heights from the ATL03 product. I assume you have used these segments?
L167: Are the retrieved Cryo2Ice snow depths not arranged along a straight line? Why then investigate spatial autocorrelation? Isn’t it nearly 1D? Moreover, when I look at Fig. 8 (bottom plot), I find it hard to imagine how this works. The sample size is not very high and there is a lot of noise. And the spacing between points is already 300 m. Can you show a variogram? (Just in the response; it does not need to go into the manuscript.)
L196: That’s a nice approach with the Sentinel-1 backscatter. I suggest checking the “stability” and representativeness of the ICESat-2 heights, making use of the other beams. Just compare the height distributions from the 3 strong beams for the area of interest.
Figure 4: I suggest changing the legend. It is misleading. It looks like the backscatter of IS2/CS2 is shown here…
L206: Is this related to Figure 11? Maybe show this together with Figure 4? Farrell et al. (2020) primarily use ATL03. The ATL07 segments can be quite long. How many segments do you get on average within the 300 m segments? Can you derive a meaningful roughness from this?
L236: Figure 7 -> Figure 6?
Figure 5: The blue line is not explained.
L251: I don’t see the negative values in Figure 8. I suggest adding a class with a specific colour for values <0. From Fig. 7, it does not look like negative values primarily occur close to the coasts.
L252: I would argue that with removing negative values, you introduce a positive bias in the snow depth retrieval. It will only make sense if you assume that underlying uncertainties affect the snow depth exclusively in one direction. But looking at Figure 7, it just seems that there is significant noise on the CS2 heights, which goes in both directions (positive and negative).
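A minimal numerical illustration of this point (synthetic numbers, not the authors’ data): with a symmetric, zero-mean retrieval noise on a positive true snow depth, discarding the negative retrievals shifts the mean upward.

```python
import numpy as np

rng = np.random.default_rng(42)
true_depth = 0.20                          # hypothetical 20 cm snow depth
noise = rng.normal(0.0, 0.15, 100_000)     # symmetric retrieval noise (m)
retrieved = true_depth + noise

mean_all = retrieved.mean()                # keeps negatives: ~unbiased
mean_pos = retrieved[retrieved > 0].mean() # drops negatives: biased high
bias = mean_pos - mean_all                 # positive truncation bias (m)
```

With these assumed numbers the truncation bias is a few centimeters, i.e., of the same order as the reported underestimation, which is why the one-sided removal deserves justification.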
L295: I haven’t fully understood why this test is done. “The test results show significant difference between in-situ sites which was also evident in the corresponding Cryo2Ice snow depths.” Which are the corresponding Cryo2Ice snow depths? I guess there are just a handful in the vicinity of each site?
L299: Related to the previous question: How many Cryo2Ice snow depths are you using for the comparison?
Figure 10: I suggest showing the “raw” distributions, not the density functions. Again, how many Cryo2Ice samples have been used at each site for the PDFs?
Figure 7: It would be also interesting to see the IS2 heights from the co-registration, averaged on the 300 m segments. Perhaps you can add them here?
Figure 8: I suggest adding the mean and standard deviation from the in-situ measurements at the 4 sites.
L363: R² = 0.04 basically means no correlation, I believe. But considering the noise level, especially from the CS2 heights, and the relatively low sample size, I wouldn’t expect a higher R² here.