the Creative Commons Attribution 4.0 License.
Brief communication: A roadmap towards credible projections of ice sheet contribution to sea level
Timothy C. Bartholomaus
Douglas J. Brinkerhoff
- Final revised paper (published on 17 Dec 2021)
- Preprint (discussion started on 21 Jun 2021)
CC1: 'Comment on tc-2021-175', Andrew Shepherd, 12 Jul 2021
- AC1: 'Comment on tc-2021-175', Andy Aschwanden, 27 Aug 2021
RC1: 'Comment on tc-2021-175', Alexander Robel, 22 Jul 2021
This brief communication from Aschwanden et al. argues for a new approach to organizing future iterations of the Ice Sheet Model Intercomparison Project (ISMIP). ISMIP is a community-driven effort to collect and compare simulations from ice sheet modeling groups that are capable of simulating ice sheet evolution relevant to sea level rise from the recent past and the near future. The sixth iteration of ISMIP has recently concluded, with a series of papers in The Cryosphere reporting the outcome from this inter-comparison exercise (Goelzer et al. 2020, Seroussi et al. 2020). There are two main thrusts to the suggestions made in this manuscript:
1. ISMIP should make an effort at formal uncertainty quantification using standardized sets of parameter-perturbed ensembles
2. ISMIP should calibrate projections of future ice sheet change by comparison with observations of past ice sheet change.
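The second thrust — calibrating projections against observations of past change — amounts to Bayesian conditioning of an ensemble. A minimal sketch of one common approach (importance weighting of ensemble members by their misfit to an observed mass-loss estimate) is shown below; all numbers and variable names are illustrative, not ISMIP6 output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: each member's cumulative mass loss (Gt) over the
# observational period, and its projected sea-level contribution at 2100 (cm SLE).
n_members = 500
hist_loss = rng.normal(3000.0, 1500.0, n_members)                 # simulated historical loss
proj_2100 = 1.0 + 0.002 * hist_loss + rng.normal(0.0, 1.0, n_members)

# Observational estimate with uncertainty (values illustrative, IMBIE-style).
obs_loss, obs_sigma = 4000.0, 400.0

# Bayesian calibration by importance weighting: a Gaussian likelihood scores
# each member's fit to the observed mass loss; weights are normalized to sum to 1.
log_w = -0.5 * ((hist_loss - obs_loss) / obs_sigma) ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()

prior_mean = proj_2100.mean()            # uncalibrated ensemble mean
posterior_mean = np.sum(w * proj_2100)   # calibration pulls the projection toward
                                         # members consistent with observations
print(f"prior mean: {prior_mean:.2f} cm SLE, posterior mean: {posterior_mean:.2f} cm SLE")
```

Because the synthetic observation sits above the ensemble's central historical loss, the calibrated (posterior) projection is higher than the raw ensemble mean — the direction of shift the manuscript anticipates, though in general calibration can move projections either way.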
Overall, I think this is a valuable manuscript with a message that should be considered by the ice sheet modeling community, particularly those who organize and participate in ISMIP (especially as ISMIP7 begins to ramp up). There are places where it can be improved, and I have provided constructive suggestions in this regard below, organized into more significant conceptual issues and minor textual/technical issues.
1. It seems accurate to say that section 1 and figure 1 are meant to be the "problem statement" of this manuscript, drawing attention to the shortcomings of the projections from the ISMIP6 multi-model ensemble, with a particular focus on simulated cumulative mass loss from Greenland. While I agree with the general sentiment (and I think most ice sheet modelers would as well), I have some conceptual issues with how the argument is made here that prevent me from being 100% convinced:
(a) It is not obvious that a cumulative metric, such as the one used in Figure 1, is the correct one to make the point that there is a mismatch between models and observations. In particular, use of cumulative mass loss in the Figure makes it hard for me to assess whether models are consistently (through time) underestimating mass loss, or just at some point, leading to a persistent offset with respect to observations. The fact that the mismatch between models and observations doesn't appear to grow in time would indicate that the mismatch in mass loss rates is not consistent through time. It would perhaps be helpful to also provide a plot (or a second panel to this plot) showing instantaneous loss rates in simulations and observations to determine whether there are any times during the historical period where models are able to reproduce observed loss.
(b) Related to this issue, I think the following sentence (L45-46) is carrying a lot of the rhetorical weight of this section: "Underestimating recent mass loss likely translates into underestimating mass loss at 2100 as well." This is not obvious to me, and I'm not sure you've provided sufficient evidence here to support this statement. Particularly, it assumes that the sensitivity of the modeled ice sheet change to climate forcing (or at least the gap in this sensitivity between models and observations) will remain similar between the recent past and the next century. Given that we know there are many aspects of ice sheet dynamics (SMB, MISI, etc) that lead to strongly nonlinear and changing sensitivities, this statement seems hard to support without evidence. I think it is fair to say that a model that can't reproduce the past is unlikely to be skillful in predicting the future, but speculating about the direction of this disagreement seems ill-advised, unless you have evidence to support it.
(c) At various points throughout this section there is switching between referring to Greenland ISMIP projections, and all ISMIP projections. Yet, you have only shown this mismatch of cumulative loss for Greenland. Could you plot the same thing for Antarctica? Would it show the same mismatch? Given the recent manuscript by Slater et al. (2020) showing a better match from the Antarctic projections, my guess is that it would show that observations are tracking the high end of simulated Antarctic loss, but within the range of ISMIP6. Perhaps this does not make the exact point you are trying to get across here, but it would be a more accurate representation of the full ISMIP6 exercise, which included both ice sheets. Otherwise, focusing your discussion here on Greenland and discussing why the same mismatch might not be true in Antarctica would be a more comprehensive assessment of ISMIP6.
(d) It doesn't seem fair to compare observations to simulated mass losses where you have removed the unforced control simulation. It is argued in the Data Availability section that this "is intended to account for unforced model drift and mass loss committed as a result of non-equilibrium ice sheet conditions at the start of the simulations". However, the real Greenland ice sheet was probably not at equilibrium in the latter half of the 20th century, thus you are not making a fair comparison between the two, and potentially biasing to less mass loss over the simulation period. I would suggest not to remove the control simulation.
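The distinction in point (a) between cumulative loss and instantaneous rates can be made concrete by simple differencing: a constant cumulative offset between a model and observations vanishes when both series are converted to annual rates, so a rate plot isolates exactly when models misfit the observed loss. A sketch with purely illustrative numbers (not ISMIP6 or IMBIE data):

```python
import numpy as np

# Hypothetical annual cumulative mass change (Gt) for an observation series
# and one model; values are illustrative only.
years = np.arange(2008, 2021)
obs_cum = np.array([0, -250, -520, -810, -1150, -1400, -1620, -1880,
                    -2150, -2380, -2600, -2900, -3250], dtype=float)
mod_cum = obs_cum * 0.7 + 300.0   # weaker trend plus a constant offset

# Instantaneous (annual) loss rates: differencing removes the constant offset,
# leaving only the genuine rate mismatch visible.
obs_rate = np.diff(obs_cum) / np.diff(years)
mod_rate = np.diff(mod_cum) / np.diff(years)
print(obs_rate.mean(), mod_rate.mean())
```

Here the model's rates are uniformly 70% of the observed rates at every time step, while the 300 Gt offset never appears in the rate series — which is why a second, rate-based panel would make the model–observation comparison easier to interpret.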
2. There has been considerable effort in recent years to improve UQ and Bayesian calibration best practices in ice sheet modeling, which hasn't been cited here, particularly: Schlegel et al. 2018, Bulthuis et al. 2018, Nias et al. 2019, Gilford et al. 2020, DeConto et al. 2021 (among others already cited, including the study led by the lead author focused on Greenland). My takeaway from surveying this work is that the field is moving in the right direction, but not all groups have adopted the state-of-the-art practices in UQ and BC, with a large part of the reason being a lack of computational and financial/human resources, which are needed to do these sorts of resource-intensive ensembles. My suggestion would be to modify the message (perhaps softening it) to give credit where it is due (these uncited studies), and indicate how these "best practices" can be integrated into our community-wide intercomparison exercises (or perhaps have a completely separate inter-comparison exercise that is more UQ-focused). My hope would be that making these methods part of the standard intercomparison practice would spur a larger fraction of the community to be working on these problems, even outside ISMIP exercises.
3. While the mathematical formalism you adopt here is certainly nice and clean for separating and explaining the various types of model uncertainty, it is not clear that all sources of uncertainty can be separated so cleanly in reality. Particularly relevant to the suggested experimental design for ISMIP (of asking modeling groups to run ensemble simulations with a prescribed set of parameters) is the distinction between P(M) and P(k). Not all models have the same parameterizations. Not all models have the same numerical implementations of the same parameterizations. Some models have processes that no other model includes (e.g., MICI calving, temporally-varying subglacial friction). Thus, the structural model prior and the parameter prior are already convolved in many ways. Sometimes this can be controlled for (i.e. turning off the non-standard process), but sometimes it is built directly into the numerics of the model in a way that hinders simple inter-comparison across models. It would seem useful for the authors to suggest some ways that this issue can be addressed by the ISMIP organizers if this suggestion is to be taken up.
L23: attendant contribution to sea level exercise
L27: ice sheet change
L54-55: Confusing sentence
L62: Similar to point #3 above, since climate and ice sheet geometry feed back on each other, are P(F), P(M), and P(k) separable?
L65-66: This point about deterministic dynamics is somewhat in conflict with your later points about aleatory uncertainty. We would need to have perfect observations of all initial and boundary conditions along with a complete set of equations to have a system with deterministic dynamics.
L79: is it actually a random sample?
L86: it's probably worth mentioning InitMIP here
L89: why not?
L98: the fast and small-scale fracture processes
L99: captured in a large-scale model with long time steps
L107: simulation of Greenland Ice Sheet evolution
L112: It has been argued that some ice sheet processes are sufficiently complex and chaotic as to be considered part of aleatory uncertainty: calving, subglacial hydrology, etc.
L121: worth it to cite Hoffman et al. 2019 here too
L144: sentence starting "Despite..." is confusing
L146: parametric uncertainty for the contribution of the Greenland Ice Sheet to sea level exercise
L161: joint omission of...
L165-166: observations is used twice in this sentence
L167-168: confusing sentence
L170: should cite Edwards et al. 2021 here
L173: ...uncertainties in intercomparison projects. [Again, the point being that such approaches have been used in the field, but not in ISMIP.]
L177: to sea level must include all types of uncertainty simultaneously...
L183: with many realizations of random climate...
L189: uncertainty produces a prior distribution...
L200: be a bit more clear - is the argument that modeling groups should not do any of their own calibration for simulations submitted to ISMIP (under the proposed plan)? Does inversion for initial conditions count as calibration? Moreover: should all the calibration be conducted centrally (i.e. after the fact by the ISMIP team using standardized tools)?
L209-210: I'm not sure multi-decadal observations of ice sheet change are able to rule out the possible effect of internally driven climate variability. 30 years is not a climatology for ice sheet changes, which have intrinsic response time scales of many decades to hundreds of years. Moreover, one of the challenges of the limited observational record is that climate variability during the observational period may introduce a bias during calibration (e.g. calibrating ice sheet models on the changes that have occurred at Sermeq Kujalleq/Jakobshavn over the past 30 years, when they have also experienced a large internally driven warming event).
L227: DeConto et al. 2021 and Gilford 2020 useful to cite here too
L230: the framing here is US-centric (particularly for an EGU journal)
L234-235: is there any way to include an estimate of how much of OPP budget actually went to modeling/SL projects (even just for 2019). This would be an incredibly useful number to have on the published record.
L240-241: I'm a bit confused by this. Is the issue that ISMIP is voluntary or that it isn't financially supported at an appropriate level? I'm pretty sure the solution isn't to force modeling groups to participate (making it non-voluntary), but rather to incentivize groups to participate by making funds available to support the work and computation that some of the changes you are suggesting require.
L256: computing resources, will free up scientists to continue conducting basic science research, while the global community benefits from needed advances in applied science (i.e. reliable sea level projections).
- AC1: 'Comment on tc-2021-175', Andy Aschwanden, 27 Aug 2021
CC2: 'Comment on tc-2021-175', Sophie Nowicki, 03 Aug 2021
We post this comment as members of the scientific steering committee of the Ice Sheet Model Intercomparison Project for CMIP6 (ISMIP6): Sophie Nowicki, Antony Payne, Eric Larour, Heiko Goelzer, William Lipscomb, Helene Seroussi, Ayako Abe-Ouchi, Andrew Shepherd.
We welcome comments and suggestions for follow-on extensions for the ISMIP6 protocol. We are pleased to see these suggestions from some of the participants of the original ISMIP6 project.
1. The commentary in this manuscript is focussed on a single multi-model comparison (ISMIP6 Greenland). Many of the suggestions would similarly apply to other initiatives in the ice sheet modelling community (e.g., ISMIP6 Antarctica, ABUMIP, and LARMIP2), outside of it (e.g., GlacierMIP or even CMIP), and equally to many individual projections. While ISMIP6-Greenland can still be used as an example, we would appreciate clarification on why the authors did not undertake a broader approach (for example, using both ice sheets).
2. The framing of how ISMIP6 results have entered the AR6 report is problematic. The authors state: "This ISMIP6 distribution has since been adopted as the foundation for the IPCC AR6 consensus estimate of sea level contribution from ice sheets." As of now, the AR6 report has not been released, so it is not yet possible to say how the AR6 authors used ISMIP6 results. IPCC guidelines (Mastrandrea et al., 2010) on uncertainty do not support the use of a single line of evidence as the basis for an assessment, and therefore the AR6 consensus estimates will likely draw from many studies.
3. The ISMIP6 protocol and goals are not always accurately represented in this commentary. For example, participants were not limited to a “single” contribution, so anyone could have contributed to ISMIP6 a series of simulations to sample uncertainty in the way the authors describe. See specific comments below.
4. The authors generally do not clearly distinguish ice sheet model uncertainty from forcing uncertainty. Their proposed method of partitioning uncertainty relies on an oversimplification of the ISMIP6 protocol as it ignores complexities not present in, for example, Aschwanden et al. (2019).
5. Can the authors more clearly link Sect. 5 to the rest of the paper? This section seems to focus on a separate issue that might be relevant to a different audience. Taken in isolation, the ISMIP6 Greenland projections do not provide a strong basis for the commentary.
L. 3: "...few models reproduce historical mass loss accurately"
This is an important argument in the manuscript, but it requires a more nuanced treatment. Questions that need to be addressed are: Can and should the ISMIP6 projections be expected to reproduce observed mass changes? Would reproducing mass change over the relatively short observational period be a meaningful quality criterion for centennial time scale projections? We address these points in more detail below.
L. 6: “the future sea level contribution from Greenland may well be significantly higher than reported”
Little evidence is presented for this statement in the main text. If the claim is based on the possibly larger uncertainty, it could mean both higher and lower contributions. We suggest reformulation.
L. 8-11: “Finally, we note that tremendous government investment ...”
We agree on the need for more investment in ice-sheet and sea-level research. However, if the "significantly volunteer effort" refers to ISMIP6, the wording ("is founded on") may exaggerate the role of ISMIP6 results in government planning. See the comments above on multiple lines of evidence in IPCC assessments.
L. 20: “accurate”
To be “accurate” implies comparison to observation. Projections are of the future, so it is impossible to assess their accuracy now. We suggest a different phrasing.
L. 21: “defensible assessment of uncertainty”
The IPCC has robust guidelines to ensure that uncertainty is indeed assessed (e.g., Mastrandrea et al., 2010). As written, this sentence calls into doubt the IPCC methods of assessing the uncertainty. Please rephrase.
L. 23: “Ice Sheet Model Intercomparison for CMIP Phase 6 (ISMIP6)”
The correct name is “Ice Sheet Model Intercomparison Project for CMIP6 (ISMIP6)”.
L. 24: “(ISMIP6)”
References for ISMIP6 should include Nowicki et al. (2020) and Payne et al. (2021). Nowicki et al. (2020) contains the description of the experimental protocol which is mostly criticised here.
Figure 1: This figure is presented as a key line of evidence in Sect. 1, but important choices of selecting data, processing output, and combining results have not been motivated and described in sufficient detail. In particular:
* It is unclear how the uncertainty envelope has been derived. The original figure for the historical period (Fig. 4, Goelzer et al., 2020) does not attempt to show uncertainty in the model results, but simply reports the ensemble. Please clarify how the 90% credibility interval was derived.
* The uncertainty envelope of, e.g., IMBIE depends on the choice of assuming fully correlated or uncorrelated errors when accumulating uncertainties (compare again Fig. 4 in Goelzer et al., 2020). Can you motivate your choice for the narrower envelope and discuss the implications?
* Please discuss the conceptual difference between the historical experiments (until 2014) and the projections (2015+). While modellers were free in their simulation of the historical period, the projections were tightly constrained by CMIP model output. Combined analysis across those experiments (e.g., 2008–2020) is therefore difficult to interpret. See also the comment on L. 45 below.
L. 27-28: “ISMIP6 produced probabilistic distributions of projected sea level contribution”
ISMIP6 did not produce probabilistic results – it presented ensembles with no probabilities attached. Others (e.g., Edwards et al., 2021) have used these ensembles to make probabilistic assessments, but their analysis includes additional information and not simply the ISMIP6 results.
L. 31: “This ISMIP6 distribution has since been adopted as the foundation for the IPCC AR6 consensus estimate.”
What is the basis for this statement? See the general comment above.
L. 35: “Our skepticism regarding the ISMIP6 projections is based on the premise that accurate predictions of the cryosphere’s contribution to sea level require that models:
1. Fully characterize uncertainties in model structure, parameters, initial conditions, and boundary conditions.
2. Yield simulations that fit observations within observational uncertainty."
Although the requirements are laudable, it is nearly impossible for any study to achieve both, let alone a large multi-model project such as ISMIP6. To “fully characterize uncertainties” is demanding indeed, but this is not a problem within a single study, as long as other research addresses other uncertainties. It is also impossible for a model to fit all available observations within observational uncertainty, unless the model is overtuned. One can argue that particular observations are supremely important, but it is not obvious that recent mass loss is more important than, say, an accurate simulation of observed ice extent, thickness, and velocity.
Although this paragraph is set up to elaborate both requirements, the subsequent analysis of Fig. 1 is based only on the second requirement. See the comments above on the augmentation of ISMIP6 results.
L. 43: “Most simulations underestimate recent (2008–2020) mass loss.”
The period 2008–2020 straddles the ISMIP6 historical period (ending in 2014), during which modellers used the forcing of their choice, and the future period (2015+), when forcing was provided by climate models. Mass loss from 2015 reflects, in part, natural variability that would not be reproduced by the climate models.
Why focus on 2008–2020? Figure 1 (and Fig. 4 in Goelzer et al., 2020) start before 2008, so the statement applies also to a longer time period.
L. 45: “Underestimating recent mass loss likely translates into underestimating mass loss at 2100 as well.”
This is not necessarily true. As pointed out above, it is important to distinguish the historical period (before 2015) from the projections. Generally speaking, in order for ice sheet mass loss to be accurately simulated for the recent past (2008–2020), two things are required: The climate forcing should be accurate, and the ice sheet model should accurately represent the processes translating this forcing into mass loss.
Modellers were free to choose their own forcing for the historical period; in most cases, they used SMB output from regional atmosphere models such as RACMO and MAR. Some ice sheet models may have applied SMB forcing that was biased positive for the period 2008–2014. Also, most models did not apply forcing to outlet glaciers before 2015.
ISMIP6 climate forcing from 2015 onward was derived from the CMIP5 and CMIP6 Earth System Model (ESM) ensembles. A known complication of this forcing is that interannual variability (known to be important in determining Greenland’s mass budget) is seldom in phase with the observed climate. This significantly complicates a model–observation comparison over a short period of 12 years.
For these reasons, models that underestimated mass loss during 2008–2020 might have been responding realistically to biased forcing. To demonstrate that they underestimate mass loss at 2100, one would need to argue that (1) the ESM-derived SMB forcing through 2100 is biased positive, and/or (2) the models underestimate recent mass loss when forced with an accurate SMB and outlet glacier forcing.
L. 61: “In this short communication we will not address the issue of uncertainty in the forcing F (Team et al., 2010) but concentrate on the uncertainties arising solely from ice sheet models.”
There appears to be some confusion about this separation. In ISMIP6 (unlike Aschwanden et al., 2019), the SMB and outlet glacier forcing are prescribed and are not part of the ice sheet model formulation. Much of what can be assigned to parametric uncertainty in Aschwanden et al. (2019) has no equivalent as ice sheet model uncertainty in the ISMIP6 projections, but rather is connected to uncertainties in the forcing. Thus, the uncertainty framework described here does not apply in the same way to ISMIP6. Goelzer et al. (2020) state explicitly that we did not sample RCM uncertainty (which could loosely map to some of the uncertainties in PDD factors), but these complexities are not discussed here.
Since uncertainty in the forcing (SMB and outlet glaciers) could account for the issues highlighted by Fig. 1, it seems appropriate to address that uncertainty.
L. 102: “This lack of knowledge induces parametric uncertainty, for example, different values of thermal conductivity within firn might lead to different predictions of sea level contribution.”
This is true, but the thermal conductivity within firn is relevant only for the RCMs computing the SMB. So this example pertains to MAR and RACMO, but not the ensemble of ISMs.
L. 134: “(Slater et al., 2020)” , “(Barthel et al., 2020)”
Please cite two papers by Slater et al.: Slater et al. (2019) and Slater et al. (2020). The reference to Barthel et al. (2020) should be replaced by Nowicki et al. (2020), since only the latter paper shows atmospheric boundary conditions.
L. 134-137: “To allow a wide range of modeling groups to participate, concessions had to be made, resulting in an experimental setup that does not always reflect advances in modeling practices since the Sea-level Response to Ice Sheet Evolution (SeaRISE Bindschadler et al., 2013; Nowicki et al., 2013) project, including calving and frontal ablation. “
Please rephrase to better reflect the ISMIP6 standard and open protocols. The open protocol allowed groups to include calving and frontal ablation, and any other advances since SeaRISE.
L. 142-144: “Each group decided on the best parameter set for their simulations. This means that each model contributes a point estimate consisting of a single ‘best’ model run to the larger ensemble”
This statement is incorrect, since the protocol does not limit the number of simulations per group. Some groups submitted multiple runs with different physics options, resolutions, initial states, etc.
L. 146: “While it is difficult to gauge the magnitude of this underestimation, Aschwanden et al. (2019) suggest that the parametric uncertainty (inter-quartile range) at 2100 is 0.3 and 12.9 cm SLE for RCP 2.6 and 8.5, respectively, which is larger than the model uncertainty suggested by the ISMIP6 experiments (0.8 and 3.4 cm SLE, respectively).”
As pointed out above, some of the uncertainties in Aschwanden et al. (2019) would need to be mapped to forcing uncertainties in ISMIP6, in order to meaningfully compare the uncertainty in the two studies.
L. 148: “If one takes the Aschwanden et al. (2019) estimate of parametric uncertainty as reasonable, then the variance in ISMIP6’s predictive distribution is greatly underestimated with respect to the real variance”
There are reasons to think that the Aschwanden et al. (2019) estimate of parametric uncertainty may be too wide:
(1) The range of sampled PDD factors in their simplified melt model dominates their uncertainty. That range was motivated by measurements of PDD factors in the field. However, comparison with results of similar models in GrSMBMIP (https://doi.org/10.5194/tc-14-3935-2020) shows that PDD factors that produce reasonable simulations of the historical SMB are much lower than the upper range used in Aschwanden et al. (2019). Incidentally, the way the forcing is designed in Aschwanden et al. (2019) does not allow them to constrain their PDD factors based on their own historical simulations: a problem that prevents the second premise proposed in this commentary (requiring a projection system to reproduce historical behaviour) from being applied to those results.
(2) The Aschwanden et al. (2019) projections have a shortcoming in the forcing protocol, using spatially uniform temperature forcing that likely overestimates temperatures in the ablation zone. Taken together, these points suggest that results from Aschwanden et al. (2019) may be biased in both maximum contribution and uncertainty range and are not simple to interpret in relation to other estimates.
Please provide a stronger argument as to why the Aschwanden et al. (2019) projections should be considered as reasonable, bearing in mind the above comments.
L. 155: “initial ice sheet extent varied among models by up to 17%.”
How well does the 17% figure characterize the variance? Please comment on the entire distribution, not just the outliers.
ISMIP6 guidelines, as expressed in Goelzer et al. (2020), state that several metrics should be considered to evaluate a model’s representation of the initial state and possible biases (see, for example, Fig. 5 and its discussion in the text). This could be mentioned here as well.
L. 183-184: “with random climate and ocean forcings developed in collaboration with their respective modelling communities (cf. Robel et al., 2019)”.
Please clarify what is meant by “random climate and ocean forcings.”
Also, it could be noted that some ice sheet models cannot run large parameter ensembles, which would reduce the number of models able to participate.
L. 192-193: “To address both of these problems simultaneously, we advocate for conditioning ensemble predictions on relevant observations.”
Are there other examples in the literature that could illustrate this approach?
L. 247: “International governments directly support development, maintenance, and operation of the Earth System Models that serve as the foundation for CMIP6 (Eyring et al., 2016), and this financial support has contributed to a suite of models that now convincingly reproduce observed climate variability (Jones et al., 2013). It is time to similarly bring ice sheet modeling to an operational level and support it with the funding the problem deserves.”
Here, the authors might mention the work in progress to couple ISMs within ESMs.
L. 265: Data availability
Please acknowledge the use of ISMIP6 data, with text guidance available from http://www.climate-cryosphere.org/wiki/index.php?title=ISMIP6_Publication_List.
L. 270: “H. Goelzer, pers. comm., November 2020”
Please update to reference Goelzer et al. (2020). The data are the same as given and displayed in the paper, e.g. in Fig. 7.
Mastrandrea, M.D., C.B. Field, T.F. Stocker, O. Edenhofer, K.L. Ebi, D.J. Frame, H. Held, E. Kriegler, K.J. Mach, P.R. Matschoss, G.-K. Plattner, G.W. Yohe, and F.W. Zwiers, 2010: Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties. Intergovernmental Panel on Climate Change (IPCC). Available at <http://www.ipcc.ch>.
Nowicki, S., Payne, A. J., Goelzer, H., Seroussi, H., Lipscomb, W. H., Abe-Ouchi, A., Agosta, C., Alexander, P., Asay-Davis, X. S., Barthel, A., Bracegirdle, T. J., Cullather, R., Felikson, D., Fettweis, X., Gregory, J., Hattermann, T., Jourdain, N. C., Kuipers Munneke, P., Larour, E., Little, C. M., Morlighem, M., Nias, I., Shepherd, A., Simon, E., Slater, D., Smith, R., Straneo, F., Trusel, L. D., van den Broeke, M. R., and van de Wal, R.: Experimental protocol for sea-level projections from ISMIP6 standalone ice sheet models, Cryosphere, 14, 2331–2368, https://doi.org/10.5194/tc-14-2331-2020, 2020.
Slater, D. A., Straneo, F., Felikson, D., Little, C. M., Goelzer, H., Fettweis, X., and Holte, J.: Estimating Greenland tidewater glacier retreat driven by submarine melting, Cryosphere, 13, 2489-2509, https://doi.org/10.5194/tc-13-2489-2019, 2019.
Slater, D. A., Felikson, D., Straneo, F., Goelzer, H., Little, C. M., Morlighem, M., Fettweis, X., and Nowicki, S.: Twenty-first century ocean forcing of the Greenland ice sheet for modelling of sea level contribution, Cryosphere, 14, 985-1008, https://doi.org/10.5194/tc-14-985-2020, 2020.
- AC1: 'Comment on tc-2021-175', Andy Aschwanden, 27 Aug 2021
RC2: 'Comment on tc-2021-175', Nicolas Jourdain, 24 Aug 2021
Conflict of interest:
I was involved in the working group that designed the ISMIP6 ocean forcing for Antarctica (Jourdain et al. 2020) and I am one of the numerous co-authors of several ISMIP6 papers, although not those on Greenland. I am not part of the core team that framed and led ISMIP6.
Summary and recommendation:
This paper is a comment on the relevance of using the Greenland ISMIP6 ensemble as a basis for anticipating and mitigating future sea-level rise. The authors present four types of uncertainty (model, initial state, parameters, aleatoric) and explain that ISMIP6 does not account for most of them, and therefore likely underestimates the range of uncertainty on future sea level rise. They propose a path forward that consists of running more simulations to further sample uncertainty and conditioning ensemble predictions on observations. Then, they state that a volunteer effort such as ISMIP6 is under-resourced, e.g. compared to CMIP6. This short communication can be useful for the ISMIP community, but several aspects could be improved to make it more relevant (see details below). I therefore recommend this manuscript for publication after a major revision.
1- The authors are quite critical of the ISMIP6 design, but they should acknowledge that ISMIP is very new compared to CMIP, which started in 1995. It is therefore normal that ice sheet projections are not as mature as ocean–atmosphere projections. I am sure that the ISMIP community is well aware of some of the mentioned limitations, and these will hopefully be addressed in future MIPs. I nonetheless agree that the range provided by ISMIP6 or its statistical emulation should not be considered as a comprehensive estimate of uncertainty. But this is also true for the CMIP6 ensemble, in which the parametric uncertainty is poorly or not evaluated.
2- It will be challenging to make ISMIP simulations match the observational trends as long as ISMIP is forced by or coupled to CMIP models because: (1) the CMIP groups build their preindustrial control simulation by running multi-centennial or multi-millennial simulations as close as possible to a steady state (although some models still drift, e.g. Sen Gupta et al. 2013). Then, they branch off their historical simulation(s) randomly from this preindustrial run and constrain the simulation with observed anthropogenic emissions since 1850. This approach does not seem compatible with ice-sheet simulations initiated from a long paleoclimate spin-up. (2) Although the authors claim that natural climate variability has little effect on the observed sea-level contribution (L.208-211), low-frequency natural variability may still affect the trend values and make it difficult to compare individual CMIP-based ice-sheet projections to observed trends (only one ensemble member of a model needs to match the observational time series for the model to be considered good, not all members).
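The point in (2) — that a model should be judged by whether any ensemble member is consistent with the observed trend, not by its ensemble mean — can be illustrated with synthetic numbers. Everything below is hypothetical: a forced trend plus internal variability realized as a deterministic spread of members.

```python
import numpy as np

# Hypothetical: one climate model's forced mass-loss trend (Gt/yr) plus
# internal variability, realized as 20 evenly spread ensemble members.
forced_trend = -200.0
internal_sd = 60.0
members = forced_trend + internal_sd * np.linspace(-2.0, 2.0, 20)

obs_trend = -280.0  # illustrative observed trend over the same window

# Comparing obs to the ensemble mean alone suggests a large mismatch...
mean_mismatch = abs(members.mean() - obs_trend)

# ...whereas consistency only requires obs to fall within the member spread,
# since the real world is itself a single realization of internal variability.
lo, hi = members.min(), members.max()
consistent = lo <= obs_trend <= hi
print(f"mean mismatch: {mean_mismatch:.0f} Gt/yr; obs within member range: {consistent}")
```

Here the ensemble mean misses the observed trend by 80 Gt/yr, yet the observation lies comfortably inside the member range — so rejecting the model on the mean alone would be unwarranted.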
3- I am not convinced by the relevance of the “aleatoric uncertainty” (L. 110-122). First of all, the chaotic nature of the atmosphere and ocean forcing is represented in individual CMIP6 simulations (this is the aim of the multiple ensemble members provided by many CMIP groups since at least CMIP3). The resulting low-frequency (interannual to decadal) natural climate variability is therefore represented in the ISMIP6 forcing, although probably underestimated due to non-eddying ocean models (Penduff et al. 2018); but this is more a CMIP6 issue than an ISMIP6 issue. If the authors have in mind shorter-term natural variability (extreme weather or seasonal events), then MISI is probably not a good example, as the ice sheet may not be sensitive to short ice-shelf basal melt variations (e.g. Favier et al. 2019); a better example would be hydrofracturing (e.g. Robel and Banwell 2019). In that case, however, the uncertainty lies more in the ability of ice sheet and firn models to represent the entire hydrofracturing process than in the atmospheric forcing.
4- The paper is mostly about the ISMIP6 Greenland projections, although the title seems to point to ice sheets in general. MISI is also often used as an example, although it is mostly relevant to Antarctica. I think that most comments are also valid for ISMIP6 Antarctica, so I recommend balancing the paper between the model intercomparisons for both ice sheets.
5- The discussion of the lack of substantial financial support assumes that ISMIP and CMIP are two distinct entities. However, several modelling groups involved in CMIP are currently working on coupling ice sheet models into Earth System Models (e.g., UKESM, CESM, IPSL-CM, EC-EARTH), with accepted projection papers for some of these groups (i.e., potential contributors as early as CMIP7). The introduction of ice sheet models into these ESMs might change the financial situation for these ice sheet modelling groups. This should be mentioned.
6- Given that there are comments on the use of ISMIP6 results in IPCC AR6, the final version of the report (now publicly available) should be cited and described more accurately, both for the likely range it provides (Sect. 9.4 and Table 9.2 of IPCC AR6) and for its attempt to estimate high-end projections (Box 9.4).
7- There should be more discussion of how to account for the deep uncertainty related to processes that are not represented, e.g., ice sheet feedbacks to the climate system (e.g. Sadai et al. 2020), hydrofracturing and MICI (Lai et al. 2020; DeConto et al. 2021), and shear margins (Lhermitte et al. 2020).
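The point in major comment 2 above, that a forced model should be judged consistent with observations if at least one ensemble member tracks the observed trend, rather than requiring the ensemble mean or every member to do so, can be sketched as follows. All trend values, uncertainties, and the ensemble size are invented for illustration, not real CMIP or ISMIP output:

```python
import random

random.seed(0)

obs_trend = 0.68   # observed mass-loss trend (illustrative units)
obs_sigma = 0.10   # 1-sigma observational uncertainty

# Trends from ten hypothetical ensemble members of one CMIP-forced model;
# internal variability shifts each member's phase, hence the spread.
member_trends = [random.gauss(0.45, 0.15) for _ in range(10)]

# Consistency check: at least one member within ~2 sigma of the observed
# trend, rather than demanding that the ensemble mean match it.
consistent = any(abs(t - obs_trend) < 2 * obs_sigma for t in member_trends)
print("at least one member matches observations:", consistent)
```

An ensemble can pass this test even when its mean is biased away from the observed trend, which is exactly why member-by-member comparison matters when internal variability is large.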
- The title is a bit misleading as the paper is only about Greenland.
- L. 24: a recent ISMIP6 reference that is more related to CMIP6 is Payne et al. (2021) https://doi.org/10.1029/2020GL091741
- L. 30-31, about “Implicit in this approach is the assumption that the ensemble of ice sheet models perfectly spans, without bias, the range of potential sea level contribution”: this is not specific to ice sheet models; it is also true of the CMIP6 ensemble. It should also be noted that a single regional atmosphere model (MAR) was used to calculate surface mass balance and melt from the CMIP6 projections (Goelzer et al. 2020).
- L. 31-32, about “This ISMIP6 distribution has since been adopted as the foundation for the IPCC AR6 consensus estimate of sea level contribution from ice sheets”: the AR6 (Sect. 9.4) gives likely ranges based on the emulation of the ISMIP6 ensemble (Edwards et al. 2020), not on the raw ISMIP6 distribution; furthermore, the IPCC corrects the estimates from Edwards et al. by adding the historical trend (see Table 9.2 of IPCC AR6). There is also an entire box on deep uncertainty and possible high-end projections (Box 9.4 of IPCC AR6), which is intended for stakeholders with low risk tolerance.
- L. 75: instead of “poorly represented”, I would write “poorly or not represented”.
- L. 96-109: it may be worth mentioning that Hill et al. (2021, https://doi.org/10.5194/tc-2021-120) and Bulthuis et al. (2019) also investigated parametric uncertainty but for Antarctic projections. A limitation is that this is often done for a limited number of parameters and that it is difficult to define an acceptable range of parameter values.
- L. 140-151: assuming the parametric uncertainty provided by Aschwanden et al. (2019) is a reasonable estimate for all ISMIP6 models is probably a very strong assumption. First of all, some parameters used in Aschwanden et al. (2019) are related to atmosphere and ocean forcing methods that differ significantly from those of ISMIP6 (with completely different parameters). Then, parametric uncertainty can be highly ice-sheet-model dependent. Last, the methodology used to vary parameters while keeping the model trajectory consistent with paleoclimate proxies can be a matter of debate. Having said that, I agree that parametric uncertainty was not explored in ISMIP6 and that it should be explored in future ISMIPs. This is also true of the majority, if not all, of the CMIP projections, and some groups are now using increasing computing power to quantify this uncertainty rather than to increase model resolution.
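A common way to explore parametric uncertainty with a limited run budget, as discussed in the comments above, is Latin hypercube sampling over assumed parameter ranges. The sketch below is generic; the parameter names and bounds are hypothetical, not those of Aschwanden et al. (2019) or of any ISMIP6 model:

```python
import random

random.seed(1)

# Hypothetical parameter ranges (names and bounds are illustrative only).
param_ranges = {
    "basal_sliding_exponent": (1.0, 4.0),
    "calving_rate_scale":     (0.5, 2.0),
    "ocean_melt_factor":      (0.1, 1.0),
}

def latin_hypercube(ranges, n):
    """Draw n samples, stratifying each parameter range into n equal bins
    so every bin of every parameter is sampled exactly once."""
    samples = [dict() for _ in range(n)]
    for name, (lo, hi) in ranges.items():
        # One uniform draw per bin, then shuffle the bin order across samples
        # so parameters are combined randomly rather than in lockstep.
        draws = [lo + (hi - lo) * (i + random.random()) / n for i in range(n)]
        random.shuffle(draws)
        for sample, value in zip(samples, draws):
            sample[name] = value
    return samples

ensemble = latin_hypercube(param_ranges, 8)   # 8 parameter-perturbed runs
for member in ensemble[:2]:
    print(member)
```

Compared with independent random draws, this stratification covers each one-dimensional parameter range evenly, which matters when only a handful of expensive ice sheet runs can be afforded; it does not, however, resolve the harder problem the reviewer raises of choosing defensible ranges in the first place.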
EC1: 'Editor assessment on tc-2021-175', Olaf Eisen, 25 Aug 2021
Dear Andy Aschwanden & co-authors,
thank you for submitting your manuscript to TCD, the discussion phase of which will end soon. We received two referee comments and two comments from the community, the second of which is from the ISMIP6 core team. It can be said that your paper has already initiated a discussion in the community before final publication, something you probably intended.
All comments are very constructive and in some cases particularly detailed, especially those from the ISMIP6 core team, but all are in favour of moving forward with this ms, though requiring major revisions. "Major" in this case applies more to the number of revisions/changes you should make than to any wrong assumptions. Although your manuscript was initially submitted as a brief communication, I ask you to prepare a careful revision and respond to all comments. Some issues are mentioned in several comments (e.g. the role of Antarctica, the use of ISMIP6 in the AR6), so you can easily group your responses. Given the community's interest in this topic, I find it more relevant and adequate to clarify all issues in revision than to stick to the TC guidelines for "Brief Communications" regarding the number of pages.
With big thanks to the community for providing these comments, and to the two referees, I'm looking forward to your revision.
AC2: 'Reply on EC1', Andy Aschwanden, 06 Oct 2021
The comment was uploaded in the form of a supplement: https://tc.copernicus.org/preprints/tc-2021-175/tc-2021-175-AC2-supplement.pdf
A few quick comments:
1/ Slater et al. have compared AR5 projections to IMBIE and also found that the projections underestimated ice loss. Their conclusion was that the main issue was with SMB, not ice dynamics.
2/ Hofer et al. have shown that there is a significant difference between CMIP5 and CMIP6 temperature forcing, particularly in the Arctic, and that this leads to a significant difference in modelled ice loss for Greenland. Their Fig. 3b shows a cumulative difference in Greenland ice loss between CMIP5 and CMIP6 forcing of something like 1000-2000 Gt by 2020 for RCP8.5. Although this is an extreme scenario, their Fig. 5 shows the difference is similar for other pathways. Hofer has suggested that ISMIP6 models may have been forced with CMIP5; as the difference is comparable to the bias shown in your Fig. 1, it's probably worth checking this in detail.
3/ In your Fig. 1 you show IMBIE as well as two individual observational records, both of which are included in IMBIE and both of which sit at the upper range of ice losses among the IMBIE ensemble. For balance it might be a good idea to also show individual records at the lower range, or alternatively to show IMBIE alone.
4/ We have updated the IMBIE assessment for AR6 and I am happy to provide those data should you need them.