Winter Arctic sea ice thickness from ICESat-2: upgrades to freeboard and snow loading estimates and an assessment of the first three winters of data collection
- 1Cryospheric Sciences Laboratory, NASA Goddard Space Flight Center, Greenbelt, MD, USA
- 2Earth System Science Interdisciplinary Center, University of Maryland, College Park, MD, USA
- 3University of Toronto, Toronto, Canada
- 4ADNET Systems Inc., Bethesda, MD, USA
Abstract. Reliable basin-scale estimates of sea ice thickness are urgently needed to improve our understanding of recent changes and future projections of polar climate. Data collected by NASA’s ICESat-2 mission have provided new, high-resolution, estimates of sea ice freeboard across both hemispheres since data collection started in October 2018. These data have been used in recent work to produce estimates of winter Arctic sea ice thickness using snow loading estimates from the NASA Eulerian Snow On Sea Ice Model (NESOSIM). Here we provide an impact assessment of upgrades to both the ICESat-2 freeboard data (ATL10) and NESOSIM snow loading on estimates of winter Arctic sea ice thickness. Misclassified leads were removed from the freeboard algorithm in the third release (rel003) of ICESat-2 freeboard data, which increased freeboards in January and April 2019, and increased the fraction of low freeboards in November 2018, compared to rel002. These changes improved comparisons of sea ice thickness (lower mean biases and standard deviations, higher correlations) with monthly gridded thickness estimates produced from ESA’s CryoSat-2 (using the same input snow and ice density assumptions). Later releases (rel004 and rel005) of ICESat-2 ATL10 freeboards result in less significant changes in the freeboard distributions and thus thickness. The latest version of NESOSIM (version 1.1), forced by CloudSat-scaled ERA5 snowfall, has been re-calibrated using snow depth estimates obtained by NASA’s Operation IceBridge airborne mission. The upgrade from NESOSIM v1.0 to v1.1 results in only small changes in snow depth which have a less significant impact on thickness compared to the rel002 to rel003 freeboard changes. Finally, we present our updated monthly gridded winter Arctic sea ice thickness dataset and highlight key changes over the past three winter seasons of data collection (November 2018–April 2021). 
Strong differences in total winter Arctic thickness across the three winters are observed, linked to clear differences in the multiyear ice thickness at the start of each winter. Interannual changes in snow depth have significant impacts on our thickness results at regional and seasonal scales. Our analysis of recent winter Arctic sea ice thickness variability is provided online in a novel Jupyter Book format to increase transparency and user engagement with our derived gridded monthly thickness dataset.
Alek Aaron Petty et al.
Status: final response (author comments only)
RC1: 'Comment on tc-2022-39', Anonymous Referee #1, 06 Apr 2022
Review Petty et al, Winter Arctic sea ice thickness from ICESat-2:
upgrades to freeboard and snow loading estimates and an assessment
of the first three winters of data collection.
## General comments

The paper assesses the impacts of a number of changes to ICESat-2
ATL10 processing and to the NESOSIM snow model on estimates of
along-track and gridded sea ice freeboard and ice thickness. This
assessment is important for users of high level sea ice products such
as ATL20 gridded sea ice. Overall the paper is well conceived and
written. However, there are a number of issues that need to be
addressed before the paper is ready for publication. I list these
below. I also have a number of specific comments.

Overall, the quality of the figures is good. However, some of them
could be improved by adding descriptive titles/labels to each panel.
For example, figure 7 has titles but these appear to be file
variable names. Rather than "ice_thickness_unc", it would be more
helpful to readers to have "Ice thickness uncertainty" spelled out.
Likewise with panel (i) "ice thickness int" would be better as
"Interpolated ice thickness". The authors might also want to think
about a better layout and if all panels are necessary.

The Jupyter notebook is an excellent addition, as is making the code
available.

Different releases are used for different evaluations. The authors
show that there is little difference between releases 003 through 005
but it would make for a cleaner, and more up to date, analysis to use
release 005 throughout. The only exception being to show differences
between releases 002 and subsequent releases.

I would like to see a map in the main paper showing the "Inner Arctic
Ocean" region as the study region introduced as part of the methods.
This would focus readers' attention on the analysis region up front.

Figure 8 is another example of a figure that would benefit from having
labels such as a) sea ice freeboard. Parameter names are on the
y-axes but they are small. Panels a, b, etc should be referenced in
the text.

There are a number of places in the text where important statements
are put in parentheses. I think it would improve readability to
rewrite these statements as part of the main text. Some of these
parenthetical statements are unnecessary.

## Specific comments
L60. "is *being* developed"
L63. Suggest "collected to estimate sea ice thickness"
Section 2. I think it would be helpful to summarise upgrades to IS2
processing, NESOSIM and ATL20 gridding in a simple table.

L111 prefer "km" to be consistent.
L124 "0-3 cm freeboard changes at basin scales". Does "basin-scales"
refer to the Inner Arctic region used in the current paper? Maybe say
"an increase in basin average freeboard of up to 3 cm.L139 Suggest "New releases of ATL07 and ATL10 also reflect upgrades to
the underlying ATL03 processing, such as improvements in geolocation.L141 and 110. ATBD for ATL07/10 use "surface reference" rather than
"reference sea surface". To avoid confusion it might be better to use
the same terminology as the ATBD.

Figure S1. Would it be better to have this figure in the main text?
Also, the point here is that the number of reference surfaces is
reduced from rel002 to rel003 because dark leads are not used.
However, the count difference is positive. It makes more sense to me
to have this reduction as a negative number.

L190 Effectively, the β and γ terms in equation 1 are
corrections to solid precipitation. It is not clear to me what the
difference is between the two terms. They could be combined into a
single loss coefficient.

L217 Do you mean "For each OIB snow depth product, snow depths are
binned into 100 km grid cells using a drop-in-the-bucket averaging
procedure. For each grid cell, the median snow depth of the three
products is then assigned as the grid cell snow depth". So in all
cases, you are taking the middle value. If the number of products was
larger, I can see this as an acceptable approach to
avoid outliers but for just three values, you can't really identify
an outlier. It would seem that the mean is a better estimator.

L230 "within reason". This needs some clarification. Are there
limits you can set on depth or start date?

Figure 3. The left panel is busy. I suggest having a separate panel
for October and April. The horizontal grid-lines should be lighter or
removed.

L254 One of the arguments for not using the Warren climatology for
snow depth is that it is not represenative of the present day
conditions. The previous paragraph and Figure 3 have been used to
argue that recent years' snow depths are also lower than average and may
be declining. So why would you use a climatology of NESOSIM?
Wouldn't using output from an operational product or low-latency
reanalysis be a better option?

L266. The redistribution method needs a reference.
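As an editorial aside, the drop-in-the-bucket procedure queried in the L217 comment above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the grid parameters and the product names in the trailing comment are assumptions.

```python
import numpy as np

def drop_in_bucket(x, y, depths, cell=100e3, extent=1000e3):
    """Average point snow depths into square cells of side `cell` (m)."""
    nbins = int(2 * extent / cell)
    grid_sum = np.zeros((nbins, nbins))
    grid_cnt = np.zeros((nbins, nbins))
    # Cell indices for each point (x, y assumed in projected metres)
    ix = ((x + extent) // cell).astype(int)
    iy = ((y + extent) // cell).astype(int)
    ok = (ix >= 0) & (ix < nbins) & (iy >= 0) & (iy < nbins)
    np.add.at(grid_sum, (iy[ok], ix[ok]), depths[ok])
    np.add.at(grid_cnt, (iy[ok], ix[ok]), 1)
    out = grid_sum / np.maximum(grid_cnt, 1)
    out[grid_cnt == 0] = np.nan   # empty buckets carry no estimate
    return out

# Median across the three OIB products in each cell (hypothetical names):
# grids = [drop_in_bucket(x, y, d) for d in (quicklook, jpl, goddard)]
# cell_depth = np.nanmedian(np.stack(grids), axis=0)
```

With only three products per cell, the median always selects the middle value, which is the basis of the reviewer's point that a mean may be the better estimator here.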
L296 The smoothing/gridding procedure needs more explanation. It
would be helpful to say why each of the steps are done. Why use
Delaunay triangulation - generally this method is used to interpolate
unstructured data? Presumably the KDTree algorithm is to speed
up the search for neighboring cells.

Figure 4, L343. How do these look for other months and for other years?
No need to show them but a comment in the text would be helpful.

L354. Significant or major?
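A minimal sketch of the two-step interpolation queried in the L296 comment above, assuming SciPy's Delaunay-based linear interpolator plus a k-d tree distance mask. This is an editorial illustration, not the authors' implementation; the function name and the distance threshold are made up.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator
from scipy.spatial import cKDTree

def grid_with_mask(xy_data, values, xy_grid, max_dist=150e3):
    # LinearNDInterpolator triangulates the scattered/binned input
    # (Delaunay) and interpolates linearly within each triangle.
    interp = LinearNDInterpolator(xy_data, values)
    out = interp(xy_grid)
    # cKDTree gives a fast nearest-neighbour distance to the input data;
    # grid cells farther than max_dist from any observation are masked.
    dist, _ = cKDTree(xy_data).query(xy_grid)
    out[dist > max_dist] = np.nan
    return out
```

This matches the reviewer's reading: the Delaunay step handles the interpolation of unstructured (or partially filled gridded) data, while the k-d tree step only serves to speed up the neighbour search used for masking.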
L356. Prefer peak rather than mode. Mode could be confused with
operating mode.

Figure 4. How many segments are used to generate these plots? Are
dark leads more common in November?

L370. The name NESOSIMv1.1clim has not been introduced yet.
L389. Suggest "In Figure 6, we show the correlation coefficients,
mean bias and standard deviations of ICESat-2 monthly gridded ice
thickness from rel002 and rel003 compared with ESA's CryoSat-2."

What are the standard deviations of?
Why mask data less than 0.25 m?

Figure 6. I suggest removing the shading and, for each month, plot
release 002 and release 003 as separate columns. That way you can see
the overlap. The shading suggests the data is continuous rather than
discrete monthly data.

L445. NESOSIM prescribes snow density for new and old snow. The
bulk density is a weighted average of these two values. How much can
be read into variations in density?

Figure 8. Why is sea ice concentration lowest in October? Is this an
artifact of averaging?

Figure 9. The flow vectors obscure the thickness data. They are not
really discussed. Are they necessary? Could they be relegated to
supplemental material?

Line 535. Care needs to be taken with ERA5 (or any reanalysis)
near-surface variables over snow. ERA5 snow parameterisation is still
a single layer, which does not produce realistic surface fluxes
(Arduini et al., 2019).

L540. Are three years of data enough to make a statement about
strength of coupling?

L591. This seems to contradict what is shown in Figure 4.
Figures 12 and 13. The multi-year ice fraction panel is not needed.
Arduini, G., Balsamo, G., Dutra, E., Day, J. J., Sandu, I., Boussetta,
S., & Haiden, T. (2019). Impact of a Multi-Layer Snow Scheme on
Near-Surface Weather Forecasts. Journal of Advances in Modeling Earth
Systems, 11(12), 4687–4710. https://doi.org/10.1029/2019MS-
AC2: 'Reply on RC1', Alek Petty, 12 May 2022
The comment was uploaded in the form of a supplement: https://tc.copernicus.org/preprints/tc-2022-39/tc-2022-39-AC2-supplement.pdf
AC1: 'Comment on tc-2022-39', Alek Petty, 08 Apr 2022
We would like to note that after submission of our preprint, a paper that analyzed three years of winter Arctic thickness with ICESat-2, but using CryoSat-2 radar freeboard measurements to infer snow depth, was published in GRL: https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2021GL097448. This is a different approach to our method of using a snow accumulation model to constrain the time-varying snow depth (and, in our case, density too).
Our preliminary analysis has indicated a very high degree of agreement between those estimates and the final rel005 gridded monthly mean results reported in our study in terms of mean snow depth, thickness and the ~50 cm multiyear ice thinning.
We expect to add a note on this to the revised manuscript.
Alek Petty
RC2: 'Comment on tc-2022-39', Anonymous Referee #2, 14 Apr 2022
This paper discusses improvements in the ICESat-2 (IS2) processing of sea ice thickness retrievals from different releases of the IS2 products. As such it feels more like a NASA technical report that discusses how the different versions change the thickness retrievals. The author previously published in 2020 on the processing chain for IS2, and I do not find that these changes now warrant an updated assessment of thickness changes and a new publication. The question is what do we really gain from this paper vs. having a NASA technical report on the changes in data processing?
This is in part because any sea ice thickness (SIT) assessment depends strongly on the choice of snow loading used. It also depends strongly on the choice of snow depth processing applied to OIB data for validation of your snow loading, and the seasonality of this validation period. It seems that with the changes presented to NESOSIM there are minimal changes anyway to the snow loading, and thus it is the changes to the lead detection that seem to have the largest influence. To make this paper more impactful and not just a technical report on updates to IS2 data processing, one way forward could be to assess the choice of snow loading in the IS2 SIT retrievals. Since Zhou et al. (2020) already showed how different these data products can be, and other studies such as Mallett et al. (2021) and Glissenaar et al. (2021) detailed how using different snow loading can lead to different trends, one really cannot trust any assessment of thickness changes over the 3 years evaluated here without addressing the uncertainty in the snow loading. How would Figures 8-10 look using different snow data sets for example? You state that it's the freeboard processing that results in the largest changes (again indicative that this should be a technical report), but given the wide variety of snow depth data sets out there, the 3 years analysed here may be quite different depending on the data set applied. And is analysing 3 years of data really useful for assessing drivers of SIT variability? At the moment I really do not see much value in having this as a publication in The Cryosphere for an incremental update to the IS2 processing chain. That doesn't mean it shouldn't be published someplace, but The Cryosphere should be for more impactful papers.
More specific major comments:
It is stated that NESOSIM is updated to use ERA5 calibrated against CloudSat and a new blowing snow term. However, there is no validation of this blowing snow loss term, or discussion on how the coefficients, i.e. wind action threshold, blowing snow loss coefficient and atmosphere snow loss coefficients are derived and validated. There is no in situ evidence that a significant amount of snow is lost to leads in the winter (any lead in winter quickly refreezes in a matter of a few hours), and there is no assessment here of the magnitude of this new snow loss term, and comparison to the old (and presumably still used) snow loss term to leads. Since SIT retrievals depend very strongly on the snow loading, at a minimum some quantitative analysis is needed on what these changes represent in terms of the overall snow mass, and some science justification is needed for doing this in the first place. It seems that some artificial tuning is based on trying to reduce the mean difference with OIB snow depths, but of course those are not perfect either. And they are done only in the springtime, and the question is how valid this bias-correction is for other months during the winter season?
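The kind of wind-threshold loss term discussed above can be illustrated with a purely schematic sketch. This is an editorial illustration only: the function, the threshold, and the coefficient values are made up and are not NESOSIM's actual formulation.

```python
def blowing_snow_loss(h_s, wind, lead_frac, u_thresh=5.0, coeff=1e-7, dt=3600.0):
    """Schematic snow depth (m) after one time step of wind-driven loss.

    h_s       : snow depth (m)
    wind      : 10 m wind speed (m/s)
    lead_frac : open-water (lead) fraction, 0-1
    u_thresh  : wind action threshold (m/s) -- illustrative value
    coeff     : loss coefficient (per s, per m/s excess wind) -- illustrative
    """
    # No loss below the wind action threshold
    if wind <= u_thresh:
        return h_s
    # Loss scales with excess wind, lead fraction, and available snow
    loss = coeff * (wind - u_thresh) * lead_frac * h_s * dt
    return max(h_s - loss, 0.0)
```

In a form like this, the reviewer's point is that the magnitude of the term hinges entirely on how the threshold and coefficient are set and validated, and on whether snow blown into refreezing leads is genuinely lost.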
The author is wrong about what SM-LG does at the end of summer as it keeps the snow cover in places where it doesn’t entirely melt out. Also, snow can start to accumulate before September in the Arctic, and thus it seems these changes are made purely to reduce your bias but there is no physical reason to justify these changes. I do not think that because NESOSIM matches mW99 in October that you can conclude you have “good” snow depths. In fact given delays in freeze-up, I would expect much thinner snow in October compared to mW99 based on the fact that ice is forming later than it used to.
Zhou et al. (2020) showed large differences between the various atmospheric reanalysis-based approaches to snow loading as well as the remote sensing-based retrievals, with the SM-LG (Liston et al. 2020) providing more spatial structure to the snow depth/density distributions, whereas products such as NESOSIM are artificially smoothed products. I see you get around this by taking your smoothed products and then adding some artificial spatial structure to match IS2 resolution, but why regrid to 100 km in the first place? Anyone who has spent time on sea ice knows the snow is very heterogeneous and thus the artificially smoothed 100 km NESOSIM product seems unrealistic. Some justification for regridding the snow depth to 100 km is needed and why you think this artificially smoothed data set is a good representation of snow over sea ice. Also, the impact of the redistribution then to 30 m resolution is needed.
Some assessment of the impact of using different ice motion products is also needed. It is not true that updated ice motion from NSIDC is not available, and the author could have contacted the data provider for updated ice motion fields. Since OSI SAF and NSIDC ice motion vectors do not agree, how does this influence your results? It is also unclear how the Warren et al. climatology is used: are you assigning MYI snow depths on September 1 based on W99 and then accumulating snow? And finally, I'm not sure why so much smoothing is applied to both the snow and SIT retrievals, and some justification for this is needed. What does your SIT data product really give us if so much smoothing is applied? Snow and ice are highly spatially variable, and thus is this a data product that is really useful to the community if it is artificially smoothed? Wanting "pretty" maps is not a reason to do this.
I do not find much value in the CS2 to IS2 comparison. In particular, now suddenly the mW99 climatology is applied after spending much time discussing updates to NESOSIM. This seems to be only because you want to use existing products out there, which we already know are not realistic because they do not have realistic snow loading representations. Instead, maybe a comparison of the freeboards would be a better thing to do, as you can convert the IS2 snow freeboards to ice freeboards with your snow loading from NESOSIM. Then we can better understand differences at the ice freeboard level, and maybe get some insights into where the dominant scattering surface from CS2 is located as well as the influence of surface roughness on the freeboard retrievals. The use of PIOMAS is also not useful in my opinion; it's a model and has known biases, so adding it here just distracts from the overall paper.
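For reference, the freeboard-level conversion the reviewer suggests rests on the standard hydrostatic-equilibrium relations. A minimal sketch (the density values are typical assumptions, not necessarily those used in the manuscript):

```python
def ice_thickness(h_f, h_s, rho_w=1024.0, rho_i=916.0, rho_s=320.0):
    """Sea ice thickness (m) from total (snow) freeboard h_f and snow depth h_s (m).

    Densities (kg/m^3) of water, ice and snow are illustrative defaults.
    """
    # Ice freeboard is the total freeboard minus the snow depth;
    # the buoyancy balance then gives the draft and total thickness:
    # h_i * (rho_w - rho_i) = rho_w * (h_f - h_s) + rho_s * h_s
    return (rho_w * h_f - (rho_w - rho_s) * h_s) / (rho_w - rho_i)

# Converting a laser (snow) freeboard to an ice freeboard, as suggested
# for a freeboard-level comparison with CS2:
# f_ice = h_f - h_s
```

This makes the reviewer's underlying point concrete: the retrieved thickness is a strong lever on the assumed snow depth and densities, so comparisons at the ice freeboard level remove one layer of assumption.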
The abstract is too long and reads more like a technical report.
AC3: 'Reply on RC2', Alek Petty, 12 May 2022
The comment was uploaded in the form of a supplement: https://tc.copernicus.org/preprints/tc-2022-39/tc-2022-39-AC3-supplement.pdf