Dear authors,
I’d like to thank you for the extensive and well-considered responses to the reviewers. It’s a treat to read such well-thought-out replies, and I’m happy to report that all my comments have been addressed, either completely or to some degree. I therefore support the publication of this manuscript after a few minor revisions.
In my previous review, my main issues concerned the treatment of spatial scaling between ICESat-2 and CryoSat-2, and the alignment of the ellipsoidal heights of the identified CRYO2ICE observations using only tidal adjustments. While the authors have done a great job investigating the impact and the differences here, I’m not entirely convinced by some of the statements regarding the applicability of this treatment of spatial scaling. I’d therefore like to ask the authors to reconsider how this part of the work is presented and to open it up for discussion. I do believe that the methodology presented here is sound, but I worry that some of the statements are too strong and might be applied incorrectly when referenced in future studies. I therefore urge the authors to revisit, in light of these concerns, the statements and methodological choices highlighted below from the revised version of the manuscript or from the reviewer responses. I believe that discussing these points in the new section “4.3 Adjusting for the Difference in CS2 and IS2 Footprint” and in section “4.6 Surface Roughness and Cryo2Ice retrievals” would be sufficient.
Major comments
Backscatter distributions
Line 230-232. I appreciate the inclusion of backscatter across all strong beams and the point-to-point comparison included in the Appendix (Figure G1). I actually think this is of more value as a main figure in the manuscript than Figure 4 alone; perhaps you could include both.
However, I also want to point out that Site 4 shows large variations in retrieved backscatter (more than -5 dB and 3 dB), suggesting some real differences here. This supports your conclusion that substantial surface change has occurred at this site (e.g., ridging, surface features not well captured by the coarser satellite sampling); perhaps include this aspect in your roughness analysis. I would even consider making a histogram of the backscatter differences (in dB) at each site and including it along with Figure G1; see the sketch below.
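A minimal sketch of the kind of per-site histogram I have in mind, assuming the backscatter differences (in dB) at each site are already available as 1-D arrays (the dictionary name and the placeholder data below are hypothetical):

```python
# Minimal sketch: per-site histograms of backscatter differences (dB).
# "db_diff_by_site" is a hypothetical dict {site name: 1-D array of dB differences}.
import numpy as np
import matplotlib.pyplot as plt

db_diff_by_site = {
    "Site 1": np.random.normal(0.0, 0.8, 500),   # placeholder data
    "Site 4": np.random.normal(-1.0, 2.5, 500),  # placeholder data
}

fig, axes = plt.subplots(1, len(db_diff_by_site), figsize=(8, 3), sharey=True)
bins = np.arange(-6, 6.5, 0.5)
for ax, (site, diffs) in zip(axes, db_diff_by_site.items()):
    ax.hist(diffs, bins=bins, color="steelblue", edgecolor="k")
    ax.axvline(0, color="k", lw=0.8)              # zero-difference reference
    ax.set_title(site)
    ax.set_xlabel("Backscatter difference (dB)")
axes[0].set_ylabel("Count")
fig.tight_layout()
plt.show()
```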
Spatial scales
I appreciate the authors’ work on evaluating the spatial autocorrelation with semi-variograms and the inclusion of a product averaged to 1 km. However, what I really want to open a discussion on is:
I. Do we believe that the CryoSat-2 values are “good” enough, in their own right, to track the surface so that meaningful statistics can be derived from observations at this native resolution, or to what spatial extent do we believe the noise has an impact?
II. Does the semi-variogram you show truly present an inflection point/convergence level at 1 km and some convergence after 300 m, when in fact you do not know what occurs at scales shorter than 300 m due to the sampling of CryoSat-2? That is to say, you cannot really state that 300 m is where it converges when that is simply the first lag distance available to you. Especially considering the large variability in your semi-variogram, does it truly reach an inflection point?
I’m mostly worried about discussion point II, as it appears that you state that, over landfast ice, the semi-variance begins to become constant after 300 m (but this is again your first lag distance…) to indicate that the spatial autocorrelation becomes negligible (and would be so after 1 km). So, while I do not believe that you need to change your methodology, since the inclusion of the 1-km smoothing provides some insight into the impact of choosing CS2’s native resolution or not, I do believe you need to expand, in Section 4.3, on this selection of footprints/smoothing scales and how applicable it is; a minimal sketch of the lag-distance issue follows.
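To make discussion point II concrete, here is a minimal sketch of how an empirical semi-variogram is built from along-track data; with CryoSat-2’s ~300 m native sampling, the first populated lag bin already sits at that spacing, so nothing can be inferred about autocorrelation at shorter ranges (the arrays and placeholder values are hypothetical):

```python
# Minimal sketch: empirical semi-variogram of along-track heights.
# "x" = along-track distance (m), "h" = height/freeboard (m); hypothetical 1-D arrays.
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(0, 75_000, 300.0)        # ~300 m CryoSat-2 along-track sampling
h = rng.normal(0.0, 0.1, x.size)       # placeholder heights

def semivariogram(x, h, lag_edges):
    """Return lag-bin centres and semi-variance 0.5 * mean[(h_i - h_j)^2] per bin."""
    dx = np.abs(x[:, None] - x[None, :])
    dh2 = (h[:, None] - h[None, :]) ** 2
    iu = np.triu_indices(x.size, k=1)  # unique pairs only
    dx, dh2 = dx[iu], dh2[iu]
    centres, gamma = [], []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        sel = (dx >= lo) & (dx < hi)
        if sel.any():
            centres.append(0.5 * (lo + hi))
            gamma.append(0.5 * dh2[sel].mean())
    return np.array(centres), np.array(gamma)

lags, gamma = semivariogram(x, h, np.arange(0, 5_000, 300.0))
print(lags[0])  # the first populated lag bin: no information exists below ~300 m
```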
Minor comments
While the discussion and the comparison with results of previous studies provided in the conclusion is interesting (!), it should not appear for the first time in the conclusion, but rather in your discussion. I encourage you to move it further up.
You included a new citation of a recent study (Fredensborg Hansen et al. 2024) and you appear to cite it several times throughout the paper, but in different, misspelled ways. In line 50, you’ve written Freedensborg Hansne et al. (2024); in line 152 you’ve written Fosberg et al. (2024); in line 464 it says Freesborgen Hansen et al (2024); and in lines 296 and 474-475 you’ve correctly (according to your list of references) written Fredensborg Hansen et al. (2024). Please check and correct accordingly.
Data (section 2). You have not included any sub-section on the Sentinel-1 data used here for the backscatter comparison and as the background of several figures. Please include this along with any pre-processing applied, and please state which polarizations you use (also check for consistency; e.g., Figure G1 and Figure 8 seem to use either different polarizations or different colour scales. On these figures, please also include the colourbar to show the range of dB values), and describe what you would expect over the landfast ice in terms of backscatter changes.
Also, some of the figures are still not of great quality and the text is at times too small to read (e.g., the lat/lon labels on maps). I urge you to have another look at this!
Line 14-17. Does this in fact mean that it is not only the first assessment of snow depth from CRYO2ICE over lead-less ice, but the first assessment of any dual-frequency snow depth over lead-less ice?
Line 20. “after applying an ocean tide correction (…)” - “after applying an ocean tide correction based on comparison with tide gauges (…)”
Line 23. “significant” to “significantly”
Line 26. “attributing” to “attributes”
Line 26-27. When you say surface roughness, do you mean snow surface roughness or ice surface roughness – or both?
Line 37. “coincident” to “monthly composites of “: I’d be careful with the use of “coincident” here, as this is not really the case for the monthly estimates (there is some degree of spatial and temporal overlap, but in general they do not see the exact same ice).
Line 49. “A few hundred kilometers”? Fredensborg Hansen et al. (2024) show overlap of more than 500 km, and up to almost 1000 km for some tracks when consistent ICESat-2 and CryoSat-2 data were present. Perhaps change to “hundreds of kilometers”; as written, it makes it sound as if there is little truly representative data available along the transects.
Line 80. Remove “~” before “150 photons”. I’m not aware of a change to the ATL07 methodology to not be exactly 150 photons, but maybe it has?
Line 82. “For this study, the ATL03”… I’m questioning this segment-length value based on a response to the reviewers, where it was stated that you got 36 ATL07 segments within each 300-m segment, resulting in a ~8.3 m segment length. Is this the case for all your segments, or is it in fact an average over the number of segments along your ~75-km track? The former would suggest that you get exactly the same photon rate across the entire track, which seems unlikely. Could you clarify how this was computed? A sketch of the per-window check I have in mind follows.
Line 103. “over sea ice in the SARIn mode” to “over sea ice floes in the SAR/SARIn modes”. The re-tracking over leads is different! (not relevant for your study, but important nonetheless).
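To be concrete, something like the following would count ATL07 segments per 300-m window and look at the spread of the implied segment lengths, rather than reporting a single track-wide average (the array name and placeholder spacings below are hypothetical):

```python
# Minimal sketch: count ATL07 segments in each 300-m along-track window and
# inspect the spread of implied segment lengths (300 m / count per window).
# "atl07_dist" is a hypothetical 1-D array of ATL07 along-track distances (m).
import numpy as np

rng = np.random.default_rng(1)
atl07_dist = np.cumsum(rng.uniform(5.0, 15.0, 9000))  # placeholder segment spacing

edges = np.arange(0, atl07_dist.max() + 300, 300.0)
counts, _ = np.histogram(atl07_dist, bins=edges)
counts = counts[counts > 0]                            # keep populated windows only

seg_length = 300.0 / counts
print(f"mean {seg_length.mean():.1f} m, range {seg_length.min():.1f}-{seg_length.max():.1f} m")
```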
Line 144. Could you include a reference to data/where you observed this showing the high pressure during this period?
Line 150. Sentence seems to stop abruptly. Remove “to” before references, perhaps?
Line 161. Re-check formula for refractive index! Should be ^1.5, right?
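For reference, and assuming the manuscript uses the standard density-based correction for the radar wave speed in dry snow, the expression I would expect is:

```latex
% Refractive index of dry snow for snow density \rho_s in g cm^{-3},
% and the corresponding radar wave speed in the snowpack:
\eta_s = \left(1 + 0.51\,\rho_s\right)^{1.5},
\qquad
c_s = \frac{c}{\eta_s} = c\left(1 + 0.51\,\rho_s\right)^{-1.5}
```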
Line 179. How did you get this 6 cm difference? Tide gauge or from the tide models? It’s not entirely clear from the updated text.
Line 186. You state here that this is a semi-variogram of the in-situ snow depths, but the semi-variogram shown in the response to reviewers was for the CRYO2ICE snow depths? If that is true, it is striking that they both reach an inflection point at ~1 km as you conclude, although I must say I don’t fully believe that the CRYO2ICE semi-variogram shows an inflection/convergence at 1 km, given the sampling of CryoSat-2 and the variability (as discussed in the Major comments). If you have semi-variograms of both, I strongly encourage you to include them!
Line 231-232: I would probably change this sentence to reflect that in the majority of cases the assumption is likely valid. However, your point-to-point comparison shows some variability (which might be significant!).
Section 4.1 seems a bit short or insufficient, perhaps? Is there not more you can state when comparing with former studies? Perhaps on snow depth variability along the transect (e.g., your standard deviation compared with the ranges observed)? Expectations regarding the periods in question (beginning or end of late winter etc.)?
Figure 7. You did not update this to include the 1-km averages too? I strongly encourage you to do so.
Line 312-315. You mention here the semi-variogram, but do not show it either as a main figure or in Appendix? I strongly recommend you do include it.
Line 374. What 100-km-averaged product?
Lines 376-387. Interesting discussion! It appears to me that there are some compensating biases between CryoSat-2 and ICESat-2 when smoothing to the 1-km length scales, since very smooth or very rough ice appears to have the largest differences, while the transition zones (Sites 1 and 2) match well. It would be interesting to know what in the processing could be driving this.
Figure 7 and 12. I do wonder whether, instead of just a line, you could shade the “area” of equivalent coverage (the roughness segments/equivalent along-track coverage you show in the maps) to highlight the observations compared in the site-specific plots; see the plotting sketch below.
Section 4.6. Since this is surface roughness from ICESat-2, it is essentially snow surface roughness, and here, if I am not mistaken, it is based on the Gaussian width of the photon distributions. I think it would be worth including a short paragraph on how well you believe this roughness parameter actually represents the roughness of the surface you encountered.
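Something along these lines (matplotlib, with hypothetical site extents and a placeholder profile) would shade the along-track range covered by each site on top of the existing line plot:

```python
# Minimal sketch: shade the along-track extent of each site on an existing line plot.
# "site_extents" is a hypothetical dict {site name: (start_km, end_km)}.
import numpy as np
import matplotlib.pyplot as plt

along_track_km = np.linspace(0, 75, 500)
snow_depth_m = 0.2 + 0.05 * np.sin(along_track_km / 3)  # placeholder profile
site_extents = {"Site 1": (10, 14), "Site 4": (52, 58)}  # placeholder extents

fig, ax = plt.subplots(figsize=(8, 3))
ax.plot(along_track_km, snow_depth_m, color="k", lw=1)
for site, (start, end) in site_extents.items():
    ax.axvspan(start, end, alpha=0.25, label=site)       # shaded equivalent coverage
ax.set_xlabel("Along-track distance (km)")
ax.set_ylabel("Snow depth (m)")
ax.legend()
plt.show()
```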
Line 474. Could it also be because the negative snow depths in Fredensborg Hansen et al. (2024) are not at individual CS2 footprints, but rather at the 7-km averaged windows that you mention?
Line 484. “(…) overestimation” to “(…) overestimation in the vicinity of significantly ridged ice”, or something similar, to highlight that this was observed where you had large heterogeneity in the ice surface.
Line 490. Remove “centimeter level” or “few centimeters”.
Line 495. Remove “finer” after IS2. I am not sure what is meant here. Perhaps “high-resolution” instead, or something similar, if that is what is hinted at?