Research article | 15 Mar 2022
Evaluation of Northern Hemisphere snow water equivalent in CMIP6 models during 1982–2014
Kerttu Kouki et al.
Download
- Final revised paper (published on 15 Mar 2022)
- Supplement to the final revised paper
- Preprint (discussion started on 14 Jul 2021)
- Supplement to the preprint
Interactive discussion
Status: closed
- RC1: 'Comment on tc-2021-195', Anonymous Referee #1, 19 Aug 2021
Review of manuscript # tc-2021-195 entitled “Evaluation of Northern Hemisphere snow water equivalent in CMIP6 models with satellite-based SnowCCI data during 1982-2014”.
Summary
Kouki et al. compare climatological Northern Hemisphere snow water equivalent from a subset of CMIP6 models with observations from SnowCCI. The authors focus on two snow metrics: the 1982-2014 mean SWE during the month of February and the mean spring snow melt (from February to May). To better understand the drivers of climatological snow bias they seek to extract the influence of temperature and precipitation biases. The paper rather unsurprisingly finds that precipitation is the main driver of winter SWE biases while temperature and a residual term (meant to represent various other factors) are the main drivers of biases in spring snow melt. The study is of interest to The Cryosphere’s audience but requires revision before it can be considered for publication.
Major comments
Model selection: The decision to limit analysis to a subset of high resolution GCMs seems somewhat arbitrary and limits the paper's value. This decision should be better justified in the text. For example, the authors could show a comparison of winter SWE in high vs low resolution models as supplemental material. Otherwise, the authors should consider adding a few of the HighResMIP historical simulations (https://gmd.copernicus.org/articles/9/4185/2016/gmd-9-4185-2016.pdf) to their analysis so as to increase the ensemble size.
Interpretation of results: The authors point out discrepancies between models and observations but offer little commentary on what could be driving biases in specific GCMs. For example, they discuss a cold bias in the EC-Earth models as unique to the ensemble but fail to connect this to the fact that EC-Earth is the largest outlier in terms of snow cover extent among CMIP6 models (Mudryk et al. 2020). More insight could also be added when discussing the CESM models, which feature anomalous winter SWE.
Readability: There are also several notations used throughout which can be improved to help the reader. For example, the “model-minus-observations difference” can simply be referred to as model bias. The results section can also be better tied together. Most paragraphs in Section 3 start with “Figure __ shows ...”, which becomes very repetitive and causes the paper to lack flow.
Minor comments
L13-14 and throughout: change “SWE change rate in spring” to “spring SWE loss” or similar since the February to May SWE should decrease everywhere.
L16: I don’t understand what point is being made here: “Even too cold temperatures cannot cause too high SWE without precipitation”.
L47: State that this is largely because of the increased atmospheric moisture holding capacity.
L48: “Trends in seasonal snow also vary seasonally” awkward wording.
L48-49: State why spring snow is especially sensitive to warming (e.g., surface albedo feedback is strongest during spring).
L50: Clarify what is meant by "early-winter"?
L70: Change “the difference” to “the model bias”
L72-73: They stated that analysis is needed to understand SWE trends, but this paper only looks at climatological values.
L88-89: Could be worth showing this for one GCM in the supplement. E.g. a version of Figure 2 where the grey lines represent internal variability rather than intermodel variability.
Table 1: Add model resolution as a column since that is one of the requirements for this study.
L109: Remove “year”
L109-110: Awkward wording, rephrase: “cover non-mountainous regions, and glaciers and ice sheets are excluded.”
L119: Sun et al 2018 (doi: 10.1002/2017RG000574) is a good reference for this statement.
L120: Why not convert it to mm/month so they are directly comparable?
L125: Citation needed for this statement.
L133: Is there any downside to comparing the models at the observational resolution rather than regridding the observations to match the GCMs?
L138: Is this snow covered area calculated for each GCM or is a common snow covered area used across all models? We know from Mudryk et al. (2020) that snow cover extent is highly variable across CMIP6 models.
L144: Shouldn’t February be included in this as well since you are assessing the February mean rather than Feb 1 SWE?
L159 and throughout: change “model-minus-observation difference” to “model bias”.
L188: The precipitation and temperature biases seem fairly important to the overall story so it might be worth promoting this material to the main text.
Fig 3: “Mean difference in SWE” should be referred to as “SWE Bias” throughout
Fig 3-4: Slightly confusing how “SWE in winter” refers to February, but “Mean P in winter” refers to the Nov-Jan mean.
L221-222: Can you quantify this bias in terms of a percent of the climatology?
L225: “Overall, the GFDL models are the most consistent with the SnowCCI data” -- add “during February” after this statement.
L230: add “NH extratropical” between overestimate and precipitation.
L231: remove “dotted”
L237: reword “either too high SWE and too low T or too low SWE and too high T”
L251-252: “whereas in other models, deltaSWE is clearly smaller.” This is not the most meaningful insight; can you be more detailed?
L277: Is it realistic to treat T and P as independent variables?
L280-284: Hypothesize what is unique about these models that could be driving this.
Prior to Figure 8: it seems like there should be a figure showing spring SWE change from OBS and models before showing the biases.
L294: DeltaSWEchange is confusing notation. Consider alternatives such as DeltaSWEmelt?
L298-299 and elsewhere: change “melts more slowly” to “there is less snowmelt”. What is shown does not necessarily mean snow is melting faster because they all have different SWEmax values.
L316: change “mutually biases” to “mutual biases”
L327-348: Discussion of EC-Earth biases could mention that these models drastically overestimate NH snow cover extent.
L337: “biases in snow melt rate in spring are dominated by other factors than T or P” – further discuss some possible factors in the text (e.g. snow-covered surface albedo biases, which have been documented by numerous studies, albedo feedback strength).
- AC1: 'Reply on RC1', Kerttu Kouki, 11 Oct 2021
- RC2: 'Comments on Kouki et al. (tc-2021-195)', Anonymous Referee #2, 23 Aug 2021
General comments
In this work, Kouki et al. use the SnowCCI data (passive microwave + point-based snow depth measurements) to evaluate CMIP6 model-based SWE products and to address the dominant factors causing SWE discrepancies using a linear regression approach. The study presents results quantifying the discrepancies between CMIP6 and SnowCCI SWE and the relative contributions of precipitation and temperature to the differences for the winter and spring seasons, respectively. The paper is generally well written, and the presentation quality of the figures is great. However, the current manuscript needs to be expanded before publication is warranted. Major concerns are given below. I am going to recommend that this paper be returned for major revisions, specifically for the inclusion of a more extensive literature review, additional analysis, and a reorganization of the structure of the study for The Cryosphere community.
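For context, the attribution approach I refer to above is, as I understand it, a multiple linear regression of the SWE bias on the temperature and precipitation biases. A minimal sketch of that idea (with hypothetical, synthetic arrays; not the authors' actual code or data) might look like:

```python
import numpy as np

# Hypothetical 1-D arrays of model-minus-observation biases over grid cells
# (e.g. February means for one GCM): dSWE [mm], dT [K], dP [mm/month].
rng = np.random.default_rng(0)
dT = rng.normal(0.0, 2.0, 500)
dP = rng.normal(0.0, 10.0, 500)
dSWE = 1.5 * dP - 4.0 * dT + rng.normal(0.0, 5.0, 500)

# Least-squares fit of dSWE ~ beta_P * dP + beta_T * dT + intercept.
X = np.column_stack([dP, dT, np.ones_like(dP)])
coeffs, _, _, _ = np.linalg.lstsq(X, dSWE, rcond=None)
beta_P, beta_T, intercept = coeffs

# The part of the SWE bias not explained by P and T biases is the residual term.
residual = dSWE - X @ coeffs
r_squared = 1.0 - residual.var() / dSWE.var()
print(f"beta_P={beta_P:.2f}, beta_T={beta_T:.2f}, R^2={r_squared:.2f}")
```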
Major comments
- Introduction
The Introduction currently contains extremely limited information about previous studies of climate-model-based snow products (such as the general performance of SWE products from Earth system models within CMIP, and previous findings on the differences between CMIP6 and CMIP5 snow products). I would strongly recommend including a further description of climate-model-based snow products and comparison studies (CMIP5 and CMIP6, and statistically or dynamically downscaled products, e.g. CORDEX), together with their reliability and uncertainties, in the Introduction. Also, the authors should provide much more complete information about the recent progress of the SnowCCI products from Luojus et al. (2021) and Pulliainen et al. (2020) [this manuscript should provide that information as a standalone work]. I am sure this will draw potential readers’ attention to the necessity of this study.
- Non-mountainous regions
The authors clearly state that a main differentiation of the current study from one previous study comparing SWE in CMIP6 models (Mudryk et al., 2020) is the consideration of both temperature and precipitation to explain the differences in SWE. However, I would note that, unlike Mudryk et al. (2020), this study was conducted only for non-mountainous regions because the SnowCCI SWE product is unavailable over complex topography. This is crucial for SWE because a large portion of the seasonal snow exists in mountains (for example, 40 to 60% in North America; Wrzesien et al., 2018; Kim et al., 2021). Thus, to achieve comprehensive results across the NH, I strongly suggest that the authors consider adapting the weight-based blending approach used in Mudryk et al. (2020) with one or more additional reliable SWE products to include mountainous regions in this study. They used this approach to overcome the unavailability of the GlobSnow SWE in complex terrain. The approach allowed them to merge multiple observational and reanalysis products and to evaluate CMIP6 SWE over the entire NH domain (not just non-mountainous areas). As the authors may know, in this method the weight given to the GlobSnow data is reduced linearly with increasing fraction of mountainous terrain, reaching zero for grid cells containing only mountainous terrain (see the sketch after the references below). Given that a dominant portion of the NH seasonal snow exists in mountain regions, this would surely strengthen the results. Otherwise, it should be clearly stated that this study focuses on non-mountainous regions.
- Wrzesien, M. L., Durand, M. T., Pavelsky, T. M., Kapnick, S. B., Zhang, Y., Guo, J., and Shum, C. K.: A new estimate of North American mountain snow accumulation from regional climate model simulations, Geophys. Res. Lett., 45, 1423–1432, 2018.
- Kim, R. S., Kumar, S., Vuyovich, C., Houser, P., Lundquist, J., Mudryk, L., Durand, M., Barros, A., Kim, E. J., Forman, B. A., Gutmann, E. D., Wrzesien, M. L., Garnaud, C., Sandells, M., Marshall, H.-P., Cristea, N., Pflug, J. M., Johnston, J., Cao, Y., Mocko, D., and Wang, S.: Snow Ensemble Uncertainty Project (SEUP): quantification of snow water equivalent uncertainty across North America via ensemble land surface modeling, The Cryosphere, 15, 771–791, https://doi.org/10.5194/tc-15-771-2021, 2021.
- Mudryk, L., Santolaria-Otín, M., Krinner, G., Ménégoz, M., Derksen, C., Brutel-Vuilmet, C., ... and Essery, R.: Historical Northern Hemisphere snow cover trends and projected changes in the CMIP6 multi-model ensemble, The Cryosphere, 14, 2495–2514, https://doi.org/10.5194/tc-14-2495-2020, 2020.
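For illustration only, a minimal sketch of this kind of mountain-fraction weighting (hypothetical array names; the actual blending in Mudryk et al. (2020) combines several observational and reanalysis products) could look like:

```python
import numpy as np

def blend_swe(swe_globsnow, swe_other, mountain_frac):
    """Blend two gridded SWE fields: the GlobSnow weight decreases linearly
    with the mountainous-terrain fraction (0..1) of each grid cell,
    reaching zero for cells that are entirely mountainous."""
    w_globsnow = 1.0 - np.clip(mountain_frac, 0.0, 1.0)
    return w_globsnow * swe_globsnow + (1.0 - w_globsnow) * swe_other

# Example: a cell that is 75 % mountainous takes 25 % of its SWE from
# GlobSnow and 75 % from the other product (e.g. a reanalysis).
print(blend_swe(np.array([80.0]), np.array([120.0]), np.array([0.75])))
```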
- Forested areas
I am not fully convinced that the reliability of the SnowCCI product is sufficient for it to serve as a single reference dataset for evaluating the CMIP6 SWE products and reaching a general conclusion, not only in mountainous areas (which were already masked) but also in the vegetated (or forested) areas considered in this study. There are well-known limitations of satellite-based passive microwave (PMW) sensors for snow remote sensing, and PMW retrievals are the main component used to develop the GlobSnow product. Numerous previous studies have found that passive microwave SWE products are problematic due to many issues (e.g. the deep-snow “saturation effect”, wet snow, forest canopy, terrain heterogeneity; Dong et al., 2005; Derksen et al., 2010). I believe many readers, particularly in the snow hydrology community, may also be concerned about these issues regarding the reliability of the SnowCCI product (Larue et al., 2017).
Ideally, employing a model/reanalysis SWE product (such as MERRA-2 or ERA5; Colleen et al., 2019) could mitigate the issues with the product in forested areas. It might also be helpful to discuss recent findings in the Introduction or Discussion sections. For example, a recent study from an independent group found better performance of the GlobSnow SWE product compared to passive-microwave-only SWE retrievals, particularly in maritime and warm forest environments (Cho et al., 2020; that study used the previous version, GlobSnow v2). I strongly recommend providing clear descriptions of how these issues are (or are not) dealt with, supported by sufficient literature.
- Dong, J., Walker, J. P., and Houser, P. R.: Factors affecting remotely sensed snow water equivalent uncertainty, Remote Sens. Environ., 97(1), 68–82, 2005.
- Derksen, C., Toose, P., Rees, A., Wang, L., English, M., Walker, A., and Sturm, M.: Development of a tundra-specific snow water equivalent retrieval algorithm for satellite passive microwave data, Remote Sens. Environ., 114(8), 1699–1709, 2010.
- Larue, F., Royer, A., De Sève, D., Langlois, A., Roy, A., and Brucker, L.: Validation of GlobSnow-2 snow water equivalent over Eastern Canada, Remote Sens. Environ., 194, 264–277, 2017.
- Cho, E., Jacobs, J. M., and Vuyovich, C. M.: The value of long-term (40 years) airborne gamma radiation SWE record for evaluating three observation-based gridded SWE data sets by seasonal snow and land cover classifications, Water Resour. Res., 56(1), 2020.
- Reorganization of the structure of the manuscript
I think the current manuscript should be reorganized. There are many statements in the Discussion section that belong in the Results section (or are already mentioned there). Conversely, the manuscript contains limited discussion of topics that should be covered in the Discussion, such as “comparison to previous findings and why they are similar/different”, “limitations in the methods and results”, and “future perspectives”. To make the manuscript more structured, I would recommend separating Data and Methods and creating subsections within the Data section, such as “SnowCCI”, “MERRA-2 temperature”, “GPCC precipitation”, and “CMIP6”. Also, for the Discussion section, I suggest dividing the current form into subsections based on the major findings, such as “CMIP6 performance”, “Relative contribution of P and T to SWE”, and “Limitations and future perspectives”. This would help readers find and understand this work.
- The residual term
There are many parts that merely speculate on the reasons for the residual term, without supporting explanations based on previous findings or a sensitivity analysis (e.g. L254-255, L413-414), even though the contribution of the term is considerable. (1) Please provide reasonable rationales to support the authors’ statements. In this regard, I think land characteristics such as forest fraction and/or spatial heterogeneity can also contribute to the residual. To examine this, (2) I suggest that the authors conduct a sensitivity analysis to help explain the regional differences in the residual seen in Figures 7 and 12 (see the sketch below for one possible approach).
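One simple form such a sensitivity analysis could take (a sketch with hypothetical, synthetic arrays, not a prescription of the exact method) is to correlate the gridded residual with candidate land-surface characteristics:

```python
import numpy as np
from scipy import stats

# Hypothetical 1-D arrays over land grid cells: the residual term [mm] and
# candidate land-surface characteristics such as forest fraction (0..1)
# and sub-grid elevation standard deviation [m].
rng = np.random.default_rng(1)
forest_frac = rng.uniform(0.0, 1.0, 400)
elev_std = rng.uniform(0.0, 300.0, 400)
residual = 20.0 * forest_frac + rng.normal(0.0, 8.0, 400)

for name, field in [("forest fraction", forest_frac),
                    ("sub-grid elevation std", elev_std)]:
    r, p = stats.pearsonr(field, residual)
    print(f"residual vs {name}: r={r:.2f}, p={p:.3g}")
```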
Specific comments
L13 Specify in-situ “snow depth”
L54 Even though satellite remote sensing is the only option for “observing” SWE at continental scale, state-of-the-art model/reanalysis SWE products have been estimated successfully, and they have been widely used for hydrological and climate research in preference to satellite-based approaches (mostly passive microwave), probably due to the limitations noted above. I would suggest rewriting this part to cover not only remote sensing approaches but also model/reanalysis products for NH SWE.
- Huning, L. S., & AghaKouchak, A. (2020). Global snow drought hot spots and characteristics. Proceedings of the National Academy of Sciences, 117(33), 19753-19759.
L69 They used four model/reanalysis and satellite SWE datasets and combined them using a blending approach, not just satellite-based data.
L87 – 89 I think presenting the results from this brief analysis (even in the supplementary information) would be helpful for keen readers. Also, please provide a detailed description of how the differences among the ensemble members are quantitatively smaller than those among models.
L91-92 Is GlobSnow v3.0 the same product as the SnowCCI data used in this study? If not, please describe the differences.
L100 – 102 Even though the GlobSnow retrieval was improved by incorporating in-situ snow depth observations, as compared to a satellite-only SWE retrieval, large uncertainties remained for the moderate-to-deep SWE range (above about 150 mm), probably due to the “saturation effect” of the volume scattering approach (Derksen et al., 2010; Cho et al., 2020). Does SnowCCI improve on these limitations compared to the previous version of GlobSnow? Based on the SWE assessment in Luojus et al. (2021), the overall RMSE for all samples and for shallow-to-moderate snow conditions only (SWE below 150 mm) is 52.6 mm and 32.7 mm, respectively.
L109-110 What percentage of the NH seasonal snow-covered area is non-mountainous? This would help readers place the conclusions of this study, which cover only non-mountainous areas, in context (if the authors adhere to non-mountainous areas).
L112-113, L361-362 Overall, I feel that the paper overstates the accuracy of the SnowCCI product as a reference dataset. Please tone this down.
L254 What does “model structural factors” mean? Be specific.
L254-255 This is speculation to me. Please provide a rationale based on the literature related to this statement.
L259 I do not think R^2 is a “parameter” of linear regression.
Figure 11 To me, the residual terms overwhelmed the contribution of P and T. In this case, are the contributions of P and T still statistically significant?
L337 Please add further discussion of the “other factors”, particularly for spring. Do you think the mismatch in spatial resolution among the datasets could be one of the reasons? If so, please add some discussion of this. In this regard, what do you think of the resampling method (nearest neighbor)?
Figure S6 There are areas where the R^2 values are extremely low. I think it would be good to show beta_P and beta_T only for regions where they are statistically significant (see the sketch below). Please consider applying this throughout all figures.
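A minimal sketch of the kind of masking I have in mind (hypothetical gridded fields; in practice the p-values would come from the regression fits themselves):

```python
import numpy as np

# Hypothetical 2-D fields on the analysis grid: a regression coefficient
# and the p-value of the corresponding fit in each grid cell.
rng = np.random.default_rng(2)
beta_P = rng.normal(1.0, 0.5, (90, 180))
p_values = rng.uniform(0.0, 0.2, (90, 180))

# Mask out (set to NaN) grid cells where the regression is not significant
# at the 5 % level, so that only robust coefficients are plotted.
beta_P_masked = np.where(p_values < 0.05, beta_P, np.nan)
```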
L342 Be consistent either “Fig” or “Figure”
L360-361 This sentence is redundant, as the authors have already mentioned this. I would suggest rephrasing to something like “while …, our study focuses on analyzing the CMIP6 SWE responses to both temperature and precipitation”.
L362-364 I am not sure these statements, which have already been made several times, are needed here.
L373 Figs.
L360-364 & 376-380 To me, this reads like a summary, not a discussion. I would strongly recommend using this space for detailed discussion, such as what is similar/different and what the new findings of this study are compared to previous studies.
L388 Figs. If you refer to more than one figure, please use “Figs.”
L430 I suggest providing much more detail on the limitations/uncertainties of SnowCCI and the other datasets, to give sufficient information to those who would use the data sets for their own research, particularly regarding the issues raised in my major comments (such as uncertainties in forested areas, which have long been challenging for the snow community). What potential uncertainties would the authors expect in GPCC? Please add sufficient discussion.
- AC2: 'Reply on RC2', Kerttu Kouki, 11 Oct 2021