An indicator of sea ice variability for the Antarctic marginal ice zone
Interactive discussion
Status: closed
- RC1: 'Comment on tc-2021-307', Anonymous Referee #1, 25 Jan 2022
This is my review of Vichi (2021).
Primarily, I want to apologize for the very long time taken in returning this review. I hope the author accepts my apology; while it has been a challenging year for meeting professional obligations, this was simply too long a wait for what is a relatively short and easy-to-read paper.
In this paper, the author seeks an alternative definition of the marginal ice zone (MIZ). They use the distribution of inter- and intra-monthly standard deviations of passive microwave SIC values, defining a MIZ metric as those periods where the standard deviation of SIC retrievals within a given month exceeds 0.1 (unitless). Their key result is that, when applied to the existing PM-SIC data, four main satellite products are in agreement as to a climatological seasonal cycle of overall MIZ extent. This is a new approach. It is clear that the MIZ requires an objective definition, and the author is making an effort to provide one.
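For concreteness, here is a minimal sketch of the kind of intra-monthly variability mask being described, assuming daily SIC fields are available as a NumPy array; the function and variable names are illustrative and not taken from the author's code:

```python
import numpy as np

def sigma_miz_mask(daily_sic, threshold=0.1):
    """Flag pixels whose intra-monthly SIC variability exceeds a threshold.

    daily_sic : array of shape (n_days, ny, nx) with SIC as a fraction in [0, 1]
                for the days of a single month.
    Returns a boolean mask that is True where the standard deviation of the
    daily SIC anomalies exceeds `threshold` (0.1 in the manuscript).
    """
    # Anomalies with respect to the monthly mean at each pixel
    anomalies = daily_sic - np.nanmean(daily_sic, axis=0)
    sigma = np.nanstd(anomalies, axis=0)
    return sigma > threshold
```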
Despite these intentions, I find methodological and conceptual flaws in the study that I do not believe permit its publication at this stage, and I recommend significant revisions be undertaken before reconsidering this MS. Generally, what this article lacks is supporting evidence. Many of the claims made by the author about this definition *could indeed be true*, and it may have immense promise as a definition of the MIZ. But there is no supporting evidence that this definition records something physically relevant to modelers, stakeholders, or observers. With this supporting information, the paper would be a useful and interesting contribution; absent it, it is hard to make much of this work.
Here I give a discussion on the merits of this work, focusing on this problem of physical and statistical foundation. I am not including specific small comments because I think any revision of this MS will require substantial changes that may render such comments obsolete. Below I include two overarching suggestions which I believe should be undertaken before this paper is published.
Discussion
————————————————————————————
The study’s motivation is that existing ways to define the MIZ do not capture the physical properties of the sea ice in the Southern Ocean: “I reassess the assumption that absolute values of sea ice concentration contain information on the sea-ice type in the Antarctic…”. Throughout the MS, the author makes reference to waves, free drift sea ice, ice types, dynamical processes, “sea-ice textures”, etc, which, to be sure, might not co-vary with sea ice concentration sensed via PM and play a key physical role in Antarctic sea ice evolution. Yet the author provides no supporting information that (a) indeed, the 15-80% threshold does not co-vary with these core sea ice physical properties, or that (b) a \sigma threshold is better, or is related to “ice type” at all.
For example, this statement in the discussion: “the proposed analysis will map relative differences between ice types, even if the specific ice type cannot be classified”. But how is this true? What, other than anomalous variability in reported SIC, is actually being measured by this metric? Why does this have anything to do with ice type, and what is the author actually referring to here by “ice type”?
The author does not provide a physical basis for *how* the MIZ should be defined anyway, using different terminology at different points throughout before settling on (L281) “variability”. Their variability is by construction the anomalous temporal variability of PM-SIC retrievals. But what the author also emphasizes, as tends to be the case in the literature, is that the MIZ is characterized visually by horizontal variability, i.e. in terms of floe-to-floe heterogeneity, not necessarily temporal variability. Why one should be interchanged with the other is not clear.
The evidence supporting the use of this new definition is in part that all four products agree on a climatological seasonal cycle of MIZ extent. The NOAA/NSIDC CDR product used here is simply the maximum value of the NT/BT algorithms (https://doi.org/10.7265/efmz-2t65). Thus the apparent spread in algorithms presented in Fig 5a is in part artificial, as NT/BT largely agree, and the CDR product must be smaller than both by definition and should not be compared. As for why the OSI-SAF product produces a wider distribution of SICs, this has its own substantial literature (e.g. Kern 2019/2020). These algorithms also agree on other metrics too, like SIE. So agreement on a global metric is not, by itself, all that motivating - there are ways that we know these algorithms all agree, and it may be that the metric you obtain is covariant with one of those. Still, figuring out whether the agreement is “real” requires some further work.
First, it is not necessarily clear they are agreeing for the right reasons: it would be useful to check the marginal ice zone fraction (Horvat, 2021) in concert with the MIZ extent (Rolph et al 2020), as this illustrates whether this agreement is consistent with the same sea ice coverage in general. As the author indicates that the use of \sigma can give rise to broader extents, is it possible that this is covariant with larger \sigma-MIZs? Additionally, looking at the spatial coherence of the MIZ definition between different products will also indicate whether the \sigma value is the same locally, or whether the definitions agree only when integrated globally.
Further, the author clearly notes that two processes can give rise to high values of \sigma: broad-scale thermodynamic processes that cause the ice edge to retreat/expand, or pixel-scale variability (perhaps caused by storms, though this is not spelled out in detail). There is no exploration of which actually drives this change, but it is sorely needed: a physical driver of \sigma values should be foundational to its definition. As mentioned, one very important thing we do know is that all PM-SIC algorithms largely agree on Antarctic sea ice area and extent - so it is possible they also have similar retreat/expansion patterns of the sea ice edge. If this is the leading cause of elevated \sigma values, then the algorithms would agree - \sigma values are simply reflecting a synoptic change which could equally well be observed in the SIC values alone. It might be easy to check this, too - if all monthly values are declining or increasing, then the variability being measured is expansion or retreat of the ice edge, and not intra-monthly heterogeneity in the sea ice.
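One way this check could be sketched is to count, at each pixel, how many of the daily SIC increments within the month share a sign (an illustrative sketch with hypothetical names, not the author's diagnostic):

```python
import numpy as np

def monotonic_fraction(daily_sic):
    """Fraction of daily SIC increments at each pixel that share one sign.

    daily_sic : array (n_days, ny, nx) of SIC for a single month.
    Values near 1 indicate near-monotonic change (ice-edge advance or
    retreat dominates the month); values near 0.5 indicate day-to-day
    fluctuations, i.e. intra-monthly heterogeneity.
    """
    increments = np.diff(daily_sic, axis=0)
    frac_up = (increments > 0).mean(axis=0)
    frac_down = (increments < 0).mean(axis=0)
    return np.maximum(frac_up, frac_down)
```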
I could, for example, propose a wholly different metric: what if you produced daily maps of the SIC-threshold MIZ (i.e. identified points with 15-80% SIC every day), and averaged this binary indicator over each month instead of defining the threshold on the monthly climatology? How different would this look from the “variability” metric, e.g. in Figs. 3-4? Why is this metric any better or worse?
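Such a metric would amount to a per-pixel monthly frequency of the daily threshold-based MIZ; a possible sketch (names illustrative):

```python
import numpy as np

def threshold_miz_frequency(daily_sic, lo=0.15, hi=0.80):
    """Monthly frequency of the daily SIC-threshold MIZ at each pixel.

    daily_sic : array (n_days, ny, nx) of SIC as a fraction.
    Returns, for each pixel, the fraction of days in the month on which
    the SIC falls inside the 15-80% band, i.e. the time-averaged binary
    indicator proposed above.
    """
    in_band = (daily_sic >= lo) & (daily_sic <= hi)
    return in_band.mean(axis=0)
```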
Finally, there is no discussion of the influence of retrieval uncertainty on the \sigma results, and there ought to be. Such errors directly impact the variability measure but will not impact the SIC thresholding (unless occurring at 15\% or 80\% SIC), which is why extent and the MIZ are designed in the way they are. There can be immense variability day-to-day, and errors for non-compact ice can be high. Without a formal assessment of the impact of measurement uncertainty, it is not possible to assess whether there is any true variability being measured. A particular problem raised in the PM observational literature is the “truncation” of SIC estimates (see Kern et al 2019) - most algorithms frequently return SIC > 1, and then set SIC > 1 to 1. This can bias the statistics of metrics like \sigma, because the capped value does not reflect a real “observation”. The OSI-450 product is a good choice here because it actually reports the true SIC estimate, which can be used in your assessment of the variability and extent (this field is raw_ice_conc_values in OSI-450 output).
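As an illustration of how the untruncated field could enter the variability calculation, a rough sketch with xarray follows; the file pattern is hypothetical, and the assumption that the field is stored in percent (like the standard ice_conc variable) should be checked against the product documentation:

```python
import xarray as xr

# Open one month of daily OSI-450 files (hypothetical file pattern).
ds = xr.open_mfdataset("ice_conc_sh_ep2_199107*.nc", combine="by_coords")

# Untruncated SIC estimate reported by OSI-450; assumed to be in percent.
raw = ds["raw_ice_conc_values"] / 100.0
capped = raw.clip(0.0, 1.0)  # emulate the usual 0-100% truncation

# Effect of truncation on the intra-monthly standard deviation
sigma_raw = raw.std(dim="time")
sigma_capped = capped.std(dim="time")
print(float((sigma_raw - sigma_capped).max()))
```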
Finally, the discussion circles around the meaning of variability without doing any direct comparison to other observations. I have mentioned the many asides to MIZ physics and ice types, which are not reflected in the product itself (and this is readily admitted by the author, see L270), nor supported by the analysis. These should be a major part of what makes this definition useful, but they do not support its inclusion.
Suggestions
————————————————————————————
I make two overarching suggestions which I hope would render this article a significant contribution to the sea-ice literature.
First, the author should relate the new definition to some physical properties of the sea ice cover relevant for those who might be interested in this definition. It is true that the current MIZ definition was simply defined operationally. But an alternate definition should have additional reasons for its suggestion. This would require the use of alternate data, i.e. a case study in a particular region with imagery, or similar, to give evidence that high \sigma regions are indeed compatible with a physical definition of the MIZ. Datasets on sea ice age, floe size, waves, surface roughness, etc, all do exist and could be used to further this effort.
Second, the author should separate the aforementioned sources of variability into that due to ice-edge retreat, real inter-monthly variability in ice conditions, and PM uncertainty. This is necessary to know whether \sigma actually contains useful information or is just reflecting uncertainty at the ice edge. Perhaps it is! That might be a useful back-door way of observing the MIZ, but without knowing it is impossible to do more than speculate.
If the proposed MIZ definition can be better grounded physically, and its relationship to sources of uncertainty that are not MIZ-related can be sorted out, I think that would make a publishable contribution to the scientific literature.
References:
————————————————————————————
Ivanova et al. (2015). Inter-comparison and evaluation of sea ice algorithms: towards further identification of challenges and optimal approach using passive microwave observations.
Kern et al. (2020). Satellite passive microwave sea-ice concentration data set inter-comparison for Arctic summer conditions.
Kern et al. (2019). Satellite passive microwave sea-ice concentration data set intercomparison: closed ice and ship-based observations.
Horvat, C. (2021). Marginal ice zone fraction benchmarks sea ice and climate model skill.
Rolph et al. (2020). Changes of the Arctic marginal ice zone during the satellite era.
Citation: https://doi.org/10.5194/tc-2021-307-RC1
- AC1: 'Reply on RC1', Marcello Vichi, 30 Mar 2022
Note to the reviewers
I would like first to apologise for the delay in providing an initial response to the comments. This was due to unfortunate timing, since the reviews came back at the very start of the academic year in the Southern Hemisphere. I had to wait for the end of the first term to find the time to focus on and adequately address the critical comments and helpful suggestions.
In addition, I have been working on an update of the manuscript using the newly released version 4 of NOAA/NSIDC Climate Data Record of Passive Microwave Sea Ice Concentration (https://doi.org/10.7265/efmz-2t65). There has been a major change in this version, which is now released with the default application of temporal and spatial filters. Users would not be able to reproduce the same results from the pre-print, and the version 3 dataset is not available any more. The difference between the NSIDC CDR and OSI-450 that I highlighted in the pre-print is now much reduced, because both datasets are released with the application of various types of filters and interpolations. As indicated by Reviewer 1 and further addressed in the specific answers, I am going to include in the revised version an analysis of the impact of the various filters on the diagnosis of SIC variability and the detection of MIZ characteristics.
The delay in this response was then additionally caused by my correspondence with the NSIDC data producers to make sure that it is possible to reproduce the version 3 results from version 4 (in addition to some small errors that I found in this new version that will likely be corrected in revision 4.1 of the dataset). This is still unresolved, but I am confident that it will be finalised by the time of submission of the revised version of this manuscript.

General answer to Reviewer 1
I would like to thank Reviewer 1 for the in-depth comments and critical appraisal of this work. This manuscript stems from the intent to ignite a discussion on the definition of the marginal ice zone with a direct application to Antarctic sea ice. The nature of the comments made me realise that, for the sake of avoiding a lengthy review of the literature in the introductory section, I failed to address the physical basis and supporting evidence of why a complementary (if not more appropriate) definition of the MIZ is required in the Southern Hemisphere. I have also realised the importance of a proper definition of what I mean by ice type and variability, and how they are related to concentration, which is the only long-term information we have available. In the revision I will make a substantial effort to explain why a threshold-based indicator of MIZ characteristics may lead to an erroneous definition of sea-ice characteristics, and why an alternative will be helpful in designing observational campaigns and in the implementation of sea-ice parameterizations in models. I will also give a demonstration of the explanatory power of the proposed diagnostics, offering a few case studies of direct observations from the South African cruises in 2017 and 2019.
I do recognise the limited detail given on the treatment of the statistical foundation and the analysis of errors, which will be included in the revised version as I further explain in the specific comments below. I am very thankful to the reviewer for the excellent suggestions made. I have started to implement some of them and I am confident that this will substantially improve the revised manuscript.
In the following text, I have copied the specific comments from RC1 in italics and my answer is in normal text.
RC1: The study’s motivation is that existing ways to define the MIZ do not capture the physical properties of the sea ice in the Southern Ocean: “I reassess the assumption that absolute values of sea ice concentration contain information on the sea-ice type in the Antarctic…”. Throughout the MS, the author makes reference to waves, free drift sea ice, ice types, dynamical processes, “sea-ice textures”, etc, which, to be sure, might not co-vary with sea ice concentration sensed via PM and play a key physical role in Antarctic sea ice evolution. Yet the author provides no supporting information that (a) indeed, the 15-80% threshold does not co-vary with these core sea ice physical properties, or that (b) a \sigma threshold is better, or is related to “ice type” at all.
Answer: This is a major oversight of the present version. I made a quick reference to some of the evidence in the literature that justifies the need to expand the threshold-based MIZ definition (Alberello et al., 2019; Vichi et al., 2020). These are mostly from the team of researchers I have been working with and from results obtained during winter campaigns in the Atlantic sector of the Southern Ocean. This list will be further expanded in the revised version, with the addition of more recent publications which indicate that portions of the ocean with more than 80% sea ice cover do show characteristics that are attributable to the MIZ (Womack et al., 2022; de Jager and Vichi, 2022).
RC1: For example, this statement in the discussion: “the proposed analysis will map relative differences between ice types, even if the specific ice type cannot be classified”. But how is this true? What, other than anomalous variability in reported SIC, is actually being measured by this metric? Why does this have anything to do with ice type, and what is the author actually referring to here by “ice type”?
Answer: I acknowledge that I overlooked these concepts by assuming that they had a common meaning, and went straight to the application of the methodology. I have now realised the importance of giving more context on how sea ice is described in direct observations and how these features of sea-ice heterogeneity do not co-vary with SIC. This will include considerations from the Expert Group on Antarctic Sea-ice Processes and Climate (ASPeCt), as well as the WMO codes. Ice type is indeed an ambiguous term, which is used differently in different contexts and is not necessarily linked to thickness and/or concentration. I will add additional references on the role of frazil ice and on how much more relevant it is in the Southern Ocean in determining retrievals of 100% ice cover in MIZ conditions. As opposed to the Arctic, supercooled water in a turbulent environment leads to surface accumulation of grey-white ice, which prevents pancakes from cementing, thus allowing wave penetration into the interior. Small changes in SIC close to 100%, which would very unlikely be captured by the 15-80% threshold, would then be indicative of the presence of MIZ processes.
RC1: The author does not provide a physical basis for *how* the MIZ should be defined anyway, using different terminology at different points throughout before settling on (L281) “variability”. Their variability is by construction the anomalous temporal variability of PM-SIC retrievals.
But what the author also emphasizes, as tends to be the case in the literature, is that the MIZ is characterized visually by horizontal variability, i.e. in terms of floe-to-floe heterogeneity, not necessarily temporal variability. Why one should be interchanged with the other is not clear.

Answer: I agree that in the submitted manuscript there is room for improving the terminology and the background definitions. It is important to acknowledge that the traditional 15-80% threshold is operational and not derived from physical bases. Earth observation measurements from PM cannot retrieve the physical features, which I propose here to indirectly address through the analysis of variability. Variability is however a generic term and I agree that it needs to be better constrained in the revised manuscript. In the Antarctic sea-ice literature there is limited knowledge about the typical physical conditions of the MIZ throughout the year and beyond the Weddell Sea or Ross Sea regions. Summer conditions have been observed more often during the relief voyages. In the revised version I will restructure the introduction to clarify the complexity of identifying the physical bases towards a better definition. In the current manuscript I have reported the definitions from textbooks and papers, in which sea ice in the MIZ is “assumed” to be in open pack conditions, with higher chances of drifting ice and the penetration of gravity waves due to the floes being smaller than the wavelength. In the MIZ there are also higher chances that the wave field is responsible for setting the sea-ice thickness. However, I do recognise that all these descriptions are not accurate definitions and are often indirectly derived. This increases the complexity of demonstrating the validity of any measure of MIZ features.
In the revised manuscript I will explain that variability encompasses a variety of cases in the real MIZ, at least according to the most recent observations in winter. I will also explain why an ergodic hypothesis is a natural basis for constructing a metric, especially when comparing with numerical models based on the continuum hypothesis. Spatial and temporal heterogeneities are a combination of different processes, which are expressed in the concept of the total or material derivative. The reason why these inhomogeneities are important is that air-sea ice-ocean exchanges of gas and momentum are enhanced or dampened according to the surface features. This may not however be the best way to describe sea ice, as indicated by the growing number of models including floe-floe interactions and Lagrangian dynamics. I will expand the discussion to include possible ways to use these results to assess model results and guide us towards the improvement of model parameterizations.
RC1: The evidence supporting the use of this new definition is in part that all four products agree on a climatological seasonal cycle of MIZ extent. The NOAA/NSIDC CDR product used here is simply the maximum value of the NT/BT algorithms (https://doi.org/10.7265/efmz-2t65). Thus the apparent spread in algorithms presented in Fig 5a is in part artificial, as NT/BT largely agree, and the CDR product must be smaller than both by definition and should not be compared. As for why the OSI-SAF product produces a wider distribution of SICs, this has its own substantial literature (e.g. Kern 2019/2020). These algorithms also agree on other metrics too, like SIE. So agreement on a global metric is not, by itself, all that motivating - there are ways that we know these algorithms all agree, and it may be that the metric you obtain is covariant with one of those. Still, figuring out whether the agreement is “real” requires some further work.
First, it is not necessarily clear they are agreeing for the right reasons: it would be useful to check the marginal ice zone fraction (Horvat, 2021) in concert with the MIZ extent (Rolph et al 2020), as this illustrates whether this agreement is consistent with the same sea ice coverage in general.

Answer: I have grouped these comments together since they all pertain to the quality of the products and how the proposed indicator reduces the spread in measuring the MIZ extent.
I am not entirely sure what the reviewer means by saying that the spread obtained using the SIC threshold is artificial. It has been previously observed (Stroeve et al., 2016) and it comes from applying the same methodology to products that are intended to represent the same physical feature. The CDR product is slightly more complex than the maximum value between the BT and NT algorithms, especially at the sea-ice edge. To my knowledge, it is a product meant to be an improvement on the individual algorithms. The rationale behind this choice is that PM algorithms tend to underestimate concentration during the summer melt season (Meier et al., 2014). Since greater underestimation is typical of the NT algorithm, the CDR implements a 10% cutoff of the BT field and then maximises the values between the two. This means that all values lower than 10% from the BT product are not included in the CDR. I do not agree with not comparing the CDR against the individual products, because this choice (driven by considerations on the summer ice conditions) does have an impact on the MIZ estimation. These concepts will be added to the revised manuscript.

There is evidence that estimates of MIZ extent from BT and NT do not agree (Stroeve et al., 2016). It is true that they agree on the overall SIE, but the aim of this work is to analyse the MIZ features in Antarctic sea ice. I think the arguments raised by the reviewer partly reinforce the conclusions I draw in this work. Despite the known limitations of each product, they all retrieve a similar measure of “variability” in sea ice, which translates into a similar estimate of the climatological MIZ extent. There is more agreement in the seasonality than found with the SIC threshold because the use of anomalies removes some of the biases of the various algorithms. This is to me an indication that there is an underlying physical meaning that goes beyond the technical limitations of each algorithm.
This does not mean that the extent obtained through \sigma is a better estimate of the MIZ. I clearly did not explain properly that the proposed indicator is not meant to substitute the estimates of SIE in the MIZ, because it is not directly comparable with the standard pack-ice SIE, or with other measures such as the MIZ fraction proposed by Horvat (2021). This will be improved in the revision. I thank the reviewer for suggesting the recent paper by Horvat, which was published when my analysis was already completed. The same reasons indicated in that paper have been driving my search for a complementary measure, and I will add these considerations in the revision.
My indicator gives additional information to what Horvat is proposing, and would likely help to further assess climate models. I used the binary mask to provide evidence of a much more extended region of variable sea ice that presents conditions more akin to the MIZ. I understand that this is misleading since I used the area as a simpler measure to relate it to the MIZ extent computation. This is clearly shown in Fig. 7, in which the number of pixels affected by higher variance of SIC is larger than the number of pixels that would be classified as sea-ice covered based on the SIE criterion. This has also been flagged by the other reviewer as a part that needs improvement, and it will be amended in the revision.
I finally acknowledge that a global metric is not the only reason why this diagnostic should be considered as a complement to the SIC-based estimates. What this agreement means and how it changes over years and regions is a question that I intend to explore further and add to the revised manuscript.

RC1: As the author indicates that the use of \sigma can give rise to broader extents, is it possible that this is covariant with larger \sigma-MIZs? Additionally, looking at the spatial coherence of the MIZ definition between different products will also indicate whether the \sigma value is the same locally, or whether the definitions agree only when integrated globally.
Answer: I thank the reviewer for this comment. A spatial analysis will be added to the revised manuscript. There could be more variability in the regions where the band of MIZ is larger, and this could further help to understand the differences between the products.
RC1: Further, the author clearly notes that two processes can give rise to high values of \sigma: broad-scale thermodynamic processes that cause the ice edge to retreat/expand, or pixel-scale variability (perhaps caused by storms, though this is not spelled out in detail). There is no exploration of which actually drives this change, but it is sorely needed: a physical driver of \sigma values should be foundational to its definition.
Answer: It is explicitly indicated in the manuscript that this analysis addresses both the seasonal advance/retreat of sea ice and the local variability induced by atmospheric forcing. I am indeed referring to extratropical cyclones, and I will make this relation more clear.
As mentioned in one of the previous answers, the current definition of the MIZ is not based on specific physical drivers, and the proposed indicator is still based on SIC data from space. Thermodynamic and dynamic processes both contribute to changes in the brightness temperature, which is the only proxy we have. Only in situ data from a large-scale observational system that combines drift and thermodynamic fluxes will produce the proper database to separate these components. These experiments have been done in the Arctic, but not yet in the Antarctic MIZ. I agree with the reviewer that this is much needed, but I argue that it would still not be possible based on the existing data from Antarctic sea ice.
It is instead possible to separate the role played by synoptic scales from the variability associated with advance/retreat. This is currently the work of a PhD student I am supervising, who published an initial analysis on the association between atmospheric anomalies and sea ice (Hepworth et al., 2022). She is applying a similar methodology, but focusing on the 5-7 day scale of polar cyclones. I will mention in the revised discussion that a follow-up work will analyse the two different drivers.

RC1: As mentioned, one very important thing we do know is that all PM-SIC algorithms largely agree on Antarctic sea ice area and extent - so it is possible they also have similar retreat/expansion patterns of the sea ice edge. If this is the leading cause of elevated \sigma values, then the algorithms would agree - \sigma values are simply reflecting a synoptic change which could equally well be observed in the SIC values alone. It might be easy to check this, too - if all monthly values are declining or increasing, then the variability being measured is expansion or retreat of the ice edge, and not intra-monthly heterogeneity in the sea ice.
Answer: If I interpret this comment correctly, it implies that the agreement presented in Fig. 5 is indicative of this method being capable of capturing the seasonal advance/retreat in a more consistent way. If that were observable in the absolute value of SIC alone, then the threshold-based estimates would agree. My argument is that the use of the threshold inevitably restricts the extent of the MIZ and its north-south progression, because certain regions of sea ice with SIC > 80% are not accounted for, even though they have been observed to present MIZ conditions (as I will explain in the revised introduction, referring to the existing literature).
The proposed indicator is indeed meant to capture the seasonal progression of the MIZ across the Southern Ocean as well as the intra-monthly heterogeneity. I realise that this is however only explained in the method section, and it will now be introduced earlier as one of the main aims of the work.

RC1: I could, for example, propose a wholly different metric: what if you produced daily maps of the SIC-threshold MIZ (i.e. identified points with 15-80% SIC every day), and averaged this binary indicator over each month instead of defining the threshold on the monthly climatology? How different would this look from the “variability” metric, e.g. in Figs. 3-4? Why is this metric any better or worse?
Answer: I agree that this method would add intensity information to the binary mask provided by the SIC threshold, thus making it more similar to the proposed indicator. However, binary indicators based on the 15-80% threshold would still not detect changes when sea ice is above 80%, a condition often found in the MIZ (please refer to the first two comments for how I will expand on this issue in the revised manuscript). I do see the point made by the reviewer and will provide a discussion of other possible alternatives in the revised version of the manuscript. I am glad that this manuscript is leading to a further search for alternative ways of determining the MIZ state. Such an indicator could be useful to detect the type of seasonal progression. For instance, if we assume a linear increase in sea ice over a month from 0% to 100%, the average of this binary indicator will tend to 0.65. Other fractions may be indicative of different types of seasonal growth conditions.
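To illustrate this idealised example numerically (a sketch, not part of the manuscript's analysis):

```python
import numpy as np

sic = np.linspace(0.0, 1.0, 31)          # idealised linear freeze-up over a 31-day month
in_band = (sic >= 0.15) & (sic <= 0.80)  # daily 15-80% binary indicator
print(in_band.mean())                    # ~0.65: fraction of the month spent inside the band
```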
RC1: Finally, there is no discussion of the influence of retrieval uncertainty on the \sigma results, and there ought to be. Such errors directly impact the variability measure but will not impact the SIC thresholding (unless occurring at 15\% or 80\% SIC), which is why extent and the MIZ are designed in the way they are. There can be immense variability day-to-day, and errors for non-compact ice can be high. Without a formal assessment of the impact of measurement uncertainty, it is not possible to assess whether there is any true variability being measured. A particular problem raised in the PM observational literature is the “truncation” of SIC estimates (see Kern et al 2019) - most algorithms frequently return SIC > 1, and then set SIC > 1 to 1. This can bias the statistics of metrics like \sigma, because the capped value does not reflect a real “observation”. The OSI-450 product is a good choice here because it actually reports the true SIC estimate, which can be used in your assessment of the variability and extent (this field is raw_ice_conc_values in OSI-450 output).
Answer: The reviewer is entirely right on this part, which has been investigated during my analysis but not adequately reported. I agree that this uncertainty may have a higher impact on the indicator than on the threshold method. In the revised version I will include a comparison with the mean total error, to show that the \sigma signal is higher than the retrieval error. Specifically, I will use the monthly root mean square error as an indicator of retrieval uncertainty.
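A minimal sketch of such a comparison, assuming daily SIC and a per-pixel total-uncertainty field are available as arrays (the names are placeholders, since products report the uncertainty under different variable names):

```python
import numpy as np

def variability_exceeds_error(daily_sic, daily_total_error):
    """Flag pixels where intra-monthly SIC variability exceeds retrieval uncertainty.

    daily_sic, daily_total_error : arrays of shape (n_days, ny, nx); the error
    field stands for whatever per-pixel total uncertainty the product provides.
    Returns True where the standard deviation of the daily SIC exceeds the
    monthly root mean square of the reported error.
    """
    sigma = np.nanstd(daily_sic, axis=0)
    rmse = np.sqrt(np.nanmean(daily_total_error**2, axis=0))
    return sigma > rmse
```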
I will also add a section to show how the use of “filters” like capping, ocean pixels and time/space interpolation may impact the application of the method. This is now necessary since version 4 of the dataset I used in the present manuscript implements these filters by default. This analysis will then be extended to the OSI-450 product, because my conclusion on the difference between the standard OSI-450 and NSIDC V4 is not valid anymore (both products are now released with the interpolated field).

RC1: Finally, the discussion circles around the meaning of variability without doing any direct comparison to other observations. I have mentioned the many asides to MIZ physics and ice types, which are not reflected in the product itself (and this is readily admitted by the author, see L270), nor supported by the analysis. These should be a major part of what makes this definition useful, but they do not support its inclusion.
Answer: This improvement will be made in the revised version. Please see the answer to the first suggestion below.
Suggestions from Reviewer 1
RC1: First, the author should relate the new definition to some physical properties of the sea ice cover relevant for those who might be interested in this definition. It is true that the current MIZ definition was simply defined operationally. But an alternate definition should have additional reasons for its suggestion. This would require the use of alternate data, i.e. a case study in a particular region with imagery, or similar, to give evidence that high \sigma regions are indeed compatible with a physical definition of the MIZ. Datasets on sea ice age, floe size, waves, surface roughness, etc, all do exist and could be used to further this effort.
Answer: I thank the reviewer for this suggestion, which will be implemented in the revised manuscript. I acknowledge the succinct description that I provided in the introduction about the need to ground the alternate definition on physical conditions observed in the MIZ. I agree that descriptive MIZ features can be obtained from the literature, although these datasets are unlikely to be comprehensive enough in a spatial and temporal sense. I have selected a few examples that include direct floe size measurements, satellite imagery and reported visual sea-ice conditions, and I will use them to assess the value of the indicator and the difference with respect to the operational MIZ classification. I am not aware of any dataset or product related to sea ice age in Antarctica. To the best of my knowledge, the NSIDC age product (https://doi.org/10.5067/UTAV7490FEPB; Tschudi et al., 2020) is only available for the Arctic.
The assessment will be done considering the climatological intent of determining regions of higher variability that is at the basis of this approach. A comparison with instantaneous observations (e.g. SAR images) or short-term cruises will need to be put into context, and this will be done accordingly in the revision. Furthermore, this indicator could be used to detect persistent artefacts in SIC retrieval (e.g. Lam et al., 2018, for an example of an erroneous persistent polynya in a region where such a feature is absent). Regions of known permanent sea-ice cover (for instance regions of multi-year ice) may show unrealistically high variability, which can be identified by using the proposed anomaly method.

RC1: Second, the author should separate the aforementioned sources of variability into that due to ice-edge retreat, real inter-monthly variability in ice conditions, and PM uncertainty. This is necessary to know whether \sigma actually contains useful information or is just reflecting uncertainty at the ice edge. Perhaps it is! That might be a useful back-door way of observing the MIZ, but without knowing it is impossible to do more than speculate.
Answer: As explained in the answers to some of the comments above, the aim of this paper is to jointly analyse the two main sources of variability in Antarctic MIZ: seasonal advance/retreat and subseasonal changes driven by synoptic atmospheric features. This will be made more explicit in the revised version. I acknowledge the importance of including an analysis of PM uncertainty, which has been left out from the current manuscript. This will be further extended to consider the impact of the various filters (capping, interpolation, etc.) on the indicator (this is essential to use the new version 4 of the NSIDC dataset).
I would however prefer to leave the discrimination of the sources of variability to further work, since this is the topic of a PhD thesis that started two years ago and whose results will be submitted within this year. An initial analysis of the role played by polar cyclones has been published by Hepworth et al. (2022).

Cited literature
Hepworth, E., Messori, G., Vichi, M., 2022. Association Between Extreme Atmospheric Anomalies Over Antarctic Sea Ice, Southern Ocean Polar Cyclones and Atmospheric Rivers. Journal of Geophysical Research: Atmospheres 127, e2021JD036121. https://doi.org/10.1029/2021JD036121
Horvat, C., 2021. Marginal ice zone fraction benchmarks sea ice and climate model skill. Nat Commun 12, 2221. https://doi.org/10.1038/s41467-021-22004-7
de Jager, W., Vichi, M., 2022. Rotational drift in Antarctic sea ice: pronounced cyclonic features and differences between data products. The Cryosphere 16, 925–940. https://doi.org/10.5194/tc-16-925-2022
Lam, H.M., Spreen, G., Heygster, G., Melsheimer, C., Young, N.W., 2018. Erroneous sea-ice concentration retrieval in the East Antarctic. Annals of Glaciology 59, 201–212. https://doi.org/10.1017/aog.2018.1
Meier, W.N., Peng, G., Scott, D.J., Savoie, M.H., 2014. Verification of a new NOAA/NSIDC passive microwave sea-ice concentration climate record. Polar Research. https://doi.org/10.3402/polar.v33.21004
Stroeve, J.C., Jenouvrier, S., Campbell, G.G., Barbraud, C., Delord, K., 2016. Mapping and assessing variability in the Antarctic marginal ice zone, pack ice and coastal polynyas in two sea ice algorithms with implications on breeding success of snow petrels. The Cryosphere 10, 1823–1843. https://doi.org/10.5194/tc-10-1823-2016
Womack, A., Vichi, M., Alberello, A., Toffoli, A., 2022. Atmospheric drivers of a winter-to-spring Lagrangian sea-ice drift in the Eastern Antarctic marginal ice zone. Journal of Glaciology 1–15. https://doi.org/10.1017/jog.2022.14
Tschudi, M.A., Meier, W.N., Stewart, J.S., 2020. An enhancement to sea ice motion and age products at the National Snow and Ice Data Center (NSIDC). The Cryosphere 14, 1519–1536. https://doi.org/10.5194/tc-14-1519-2020
Citation: https://doi.org/10.5194/tc-2021-307-AC1
- RC2: 'Comment on tc-2021-307', Anonymous Referee #2, 26 Jan 2022
This paper presents a new method to map the MIZ, which was originally mapped using the 15-85% range of sea ice concentration (SIC) from passive microwave remote sensing data. Different algorithms for deriving the SIC give very different MIZ (extent). But the new method, which uses the standard deviation of daily SIC anomalies (on a monthly basis), gives consistent MIZ (extent) based on the SIC derived from different algorithms/datasets (Figure 5). Therefore, the paper concludes that this new method is better than the 15-85% method, although without a thorough evaluation to see whether this is indeed the best MIZ (extent). I would think this is a new method that deserves further investigation, and I encourage the author to do so. I would think the very first addition to confirm the potential effectiveness of the method is to apply it to Arctic sea ice. If the same conclusion is achieved, I would think it might be effective. Another way to evaluate the method is to compare with the MIZ derived from high-resolution imagery, especially for those areas and periods (for example, late spring/summer) with the highest disparity between the new method and existing methods.
Second, as indicated in the introduction, SIC-based MIZ identification is more reliable in wintertime in the Southern Ocean, and I would agree your method seems to achieve similar results there (make sure this is correct). But for summer, especially Nov and Dec, your results show too high an extent (Figure 5), similar to or even larger than those from the 15-85% method, which were already said to be inaccurate. Since overall the Nov and Dec ice extents are smaller than those of Sep/Oct, I would say the MIZ (extent) should be smaller than the Sep/Oct MIZ (extent). I know your statistics-based MIZ includes polynyas; I am not sure if these should be excluded? MIZ-like statistics can also be found in the interior of the pack ice; should these zones also be included in the MIZ? In Figure 6, your MIZ (yellow) for December seems way too big, and this makes me doubt your method for late spring (Nov/Dec). Maybe you need to use a larger threshold value for this period? Instead of 0.1, maybe 0.15 for this case? In Figure 7, the MIZ (extent) is larger than the SIE in five months, which needs a good explanation. To me the MIZ (extent) from the NOAA CDR data seems more reasonable (all smaller than the SIE) (Figure 7). In lines 227-228, you mentioned “climatological MIZ extent shown in Fig 5 is an underestimation of sea ice area”, but then in line 232, you said that “MIZ extent presented in this work exceeds the total SIE”. There is some confusion here needing explanation.
Third, in Figure 5, I believe these are the 30-40 year averages, right? Can you show at least a subset of those for individual years, say 2008, 2009, 2010, 2011...? This would make sure those differences are also seen in yearly curves and are not just an effect of averaging over 30 or 40 years.
Fourth, your choice of 0.1 for the σ value seems arbitrary; why not 0.12, 0.15, 0.17, or 0.2…? Should this number be the same for Arctic sea ice?
Citation: https://doi.org/10.5194/tc-2021-307-RC2
- AC2: 'Reply on RC2', Marcello Vichi, 30 Mar 2022
Note to the reviewers
I would like first to apologise for the delay in providing an initial response to the comments. This was due to unfortunate timing, since the reviews came back at the very start of the academic year in the Southern Hemisphere. I had to wait for the end of the first term to find the time to focus on and adequately address the critical comments and helpful suggestions.
In addition, I have been working on an update of the manuscript using the newly released version 4 of NOAA/NSIDC Climate Data Record of Passive Microwave Sea Ice Concentration (https://doi.org/10.7265/efmz-2t65). There has been a major change in this version, which is now released with the default application of temporal and spatial filters. Users would then not be able to reproduce the same results from the pre-print, and the version 3 dataset is not available any more.
The difference between the NSIDC CDR and OSI-450 that I highlighted in the pre-print is now much reduced, because both datasets are released with the application of various types of filters and interpolations. As indicated by reviewer 1 and further addressed in the specific answers, I am going to include in the revised version an analysis of the impact of the various filters on the diagnosis of SIC variability and the detection of MIZ characteristics.
The delay in this response was then additionally caused by my correspondence with the NSIDC data producers to make sure that it is possible to reproduce the version 3 results from version 4 (in addition to some small errors that I found in this new version that will likely be corrected in revision 4.1 of the dataset). This is still unresolved, but I am confident that it will be finalised by the time of submission of the revised version of this manuscript.

Answers to Reviewer 2
I thank the Reviewer for the comments and suggestions. Together with Reviewer 1, they have given clear indications on how to improve the analysis and strengthen the use of this methodology. The comments from the reviewer are indicated in italics and my answers are in normal text.
RC2: I would think the very first addition to confirm the potential effectiveness of the method is to apply it to Arctic sea ice. If the same conclusion is achieved, I would think it might be effective. Another way to evaluate the method is to compare with the MIZ derived from high-resolution imagery, especially for those areas and periods (for example, late spring/summer) with the highest disparity between the new method and existing methods.
Answer: The literature on the MIZ definition using the operational threshold is much more extensive for the Arctic than for the Antarctic. This work addresses the limitations of the method when applied to Antarctic sea ice, but I acknowledge that I have not provided enough supporting literature to indicate the need for a new definition. This has been pointed out by Reviewer 1 and will be addressed in the revised version of the manuscript. The extent of the MIZ is more limited in Arctic sea ice, although recent literature indicates that there are shortcomings in the simulation of the MIZ fraction in climate models (Horvat, 2021). I would argue that a complete analysis of this indicator in the Arctic would be beyond the scope of this work, but I agree that some indications would be useful. In the revised version I will add a discussion on the application of this method to the Arctic, showing a figure of the \sigma distribution that will help to better constrain the choice of the threshold (as requested in the last comment below).
I will dedicate a specific section to the comparison with existing observations and case studies, as also requested by the other reviewer. I agree that descriptive MIZ features can be obtained from the literature, although these datasets are unlikely to be comprehensive enough in a spatial and temporal sense. I have selected a few examples that include direct floe size measurements, satellite imagery and reported visual sea-ice conditions, and I will use them to assess the value of the indicator and the difference with respect to the operational MIZ classification. The assessment will be done considering the climatological intent of determining regions of higher variability that is at the basis of this approach. A comparison with instantaneous observations (e.g. SAR images), long-term drifters (Womack et al., 2022), and short-term cruises will need to be put into context, and this will be done accordingly in the revision. Furthermore, this indicator could be used to detect persistent artefacts in SIC retrieval (e.g. Lam et al., 2018, for an example of an erroneous persistent polynya in a region where such a feature is absent). Regions of known permanent sea-ice cover (for instance regions of multi-year ice) may show unrealistically high variability, which can be identified by using the proposed anomaly method.

RC2: Second, as indicated in the introduction, SIC-based MIZ identification is more reliable in wintertime in the Southern Ocean, and I would agree your method seems to achieve similar results there (make sure this is correct). But for summer, especially Nov and Dec, your results show too high an extent (Figure 5), similar to or even larger than those from the 15-85% method, which were already said to be inaccurate. Since overall the Nov and Dec ice extents are smaller than those of Sep/Oct, I would say the MIZ (extent) should be smaller than the Sep/Oct MIZ (extent). I know your statistics-based MIZ includes polynyas; I am not sure if these should be excluded? MIZ-like statistics can also be found in the interior of the pack ice; should these zones also be included in the MIZ?
Answer: The reviewer is correct that Fig.5 may suggest some misleading conclusions. I acknowledge some ambiguity in the use of this method to compare with the traditional SIE. This method is originally designed to diagnose a more appropriate monthly climatology that would improve the current operational definition. Specifically it is meant to address whether a region of the seasonally covered ocean is characterised by relatively high temporal variations in SIC, a metric that cannot be obtained by the 15-80% threshold. I do not mean to say that the threshold-based estimates are not accurate, but that there are regions of the ice-covered ocean that present physical characteristics similar to the MIZ even when the concentration is above 80%. This includes areas within the pack ice and areas of polynyas. My analysis is therefore more oriented towards the estimation of variability due to heterogeneous ice conditions, independently of where they are located. These regions are mostly along the margin with the open ocean, but not necessarily, especially in autumn and spring. I will also consider a change of title in the revised manuscript to indicate that the work first addresses the variability and how this can affect our definition of the MIZ.
RC2: In Figure 6, your MIZ (yellow) for December seems way too big, and this makes me doubt your method for late spring (Nov/Dec). Maybe you need to use a larger threshold value for this period? Instead of 0.1, maybe 0.15 for this case? In Figure 7, the MIZ (extent) is larger than the SIE in five months, which needs a good explanation. To me the MIZ (extent) from the NOAA CDR data seems more reasonable (all smaller than the SIE) (Figure 7). In lines 227-228, you mentioned “climatological MIZ extent shown in Fig 5 is an underestimation of sea ice area”, but then in line 232, you said that “MIZ extent presented in this work exceeds the total SIE”. There is some confusion here needing explanation.
Answer: I apologise for the confusion. This apparent contradiction will be resolved in the revised manuscript, firstly by providing a more adequate explanation of the different ways of estimating the MIZ extent. SIE is a specific diagnostic, which is complementary to the estimation provided in this work. The most important point is how sensitive these diagnostics are to the underlying approximations. I have done further research on this, because the other reviewer suggested possible biases of this indicator due to the degree of post-processing of the raw brightness temperature data. The summer extent of the region characterised by high monthly variability is indeed modified by the inclusion of spatial and temporal interpolations. Using raw data, the December extent is further reduced (using both the SIC threshold and the \sigma indicators), which is important information. In the revised manuscript I will add a section that explores the impact of the various filters, and I will better characterise the uncertainties using the total error of the algorithms. This analysis of the uncertainties was not included in the current manuscript and will be used to better constrain the seasonal cycle of the Antarctic MIZ.
RC2: Third, in Figure 5, I believe these are the 30-40 year averages, right? Can you show at least a subset of those for individual years, say 2008, 2009, 2010, 2011...? This would make sure those differences are also seen in yearly curves and are not just an effect of averaging over 30 or 40 years.
Answer: Yes, this is the climatology. In the revised manuscript I will present specific years as well as different regions, but the caveat indicated in the manuscript at lines 220-230 is still valid. There is a fundamental computational difference between the climatological averaging of the monthly extents shown in Fig. 5b, in which a monthly mask is multiplied by the pixel area, then integrated and averaged, and the mask based on the climatological monthly standard deviation of the daily anomalies. This is because the average of the standard deviations computed from sub-samples of a population is different from the standard deviation of the whole population.
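As a toy numerical illustration of this last point (synthetic numbers, not the SIC data):

```python
import numpy as np

rng = np.random.default_rng(0)
# 40 "years" of 30 daily values for a single pixel, with year-to-year shifts of the mean
x = rng.normal(size=(40, 30)) + rng.normal(scale=0.5, size=(40, 1))

std_per_year = x.std(axis=1)   # intra-monthly sigma computed year by year
print(std_per_year.mean())     # mean of the yearly standard deviations (~1.0)
print(x.std())                 # standard deviation of the pooled sample (~1.1, inflated
                               # by the year-to-year shifts of the mean)
```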
RC2: Fourth, your choice of 0.1 for the σ value seems arbitrary; why not 0.12, 0.15, 0.17, or 0.2…? Should this number be the same for Arctic sea ice?
Answer: This number is obtained from the analysis of the median distribution shown in Fig. 2b. The results are not sensitive to 20% variations around this value, and this will be indicated in the revised manuscript. However, the threshold depends on the filtering level applied to the raw data, as shown for instance in the current manuscript when comparing Fig. 2b (from the unfiltered NSIDC CDR) with supplementary Fig. S2b (from the filtered OSI-450). Now that the NSIDC CDR Version 4 is released in filtered mode by default, this difference is not visible anymore unless the user removes the temporal and spatial interpolations. A dedicated analysis will be added in the revised manuscript. I will also add the distribution for the Arctic and will discuss the implications.
Citation: https://doi.org/10.5194/tc-2021-307-AC2