Snow accumulation over the world's glaciers (1981–2021) inferred from climate reanalyses and machine learning
- 1Department of Geosciences, University of Fribourg, Fribourg, Switzerland
- 2Laboratory of Hydraulics, Hydrology and Glaciology (VAW), ETH Zurich, Zurich, Switzerland
- 3Swiss Federal Institute for Forest, Snow and Landscape Research (WSL), Birmensdorf, Switzerland
- 4Federal Office of Meteorology and Climatology MeteoSwiss, Locarno-Monti, Switzerland
- 5WSL Institute for Snow and Avalanche Research SLF, Davos, Switzerland
- 6Climate Change, Extremes and Natural Hazards in Alpine Regions Research Center CERC, Davos, Switzerland
Abstract. Although reanalysis products provide estimates of snow precipitation for remote high-mountain regions, these estimates are inherently uncertain, and assessing a potential bias is difficult owing to the scarcity of observations, which also limits their reliability for evaluating long-term effects of climate change. Here, we compare the winter mass balance of 95 glaciers distributed over the Alps, Western Canada, Central Asia and Scandinavia with the total precipitation from the ERA-5 and MERRA-2 reanalysis products during the snow accumulation seasons from 1981 until today. We propose a machine learning model to adjust the precipitation of the reanalysis products to the elevation of the glaciers, thus deriving snow water equivalent (SWE) estimates over glaciers not covered by ground observations and/or filling observational gaps. We use a gradient boosting regressor (GBR), which combines several meteorological variables from the reanalyses (e.g. air temperature, relative humidity) with topographical parameters. These GBR-derived estimates are evaluated against the winter mass balance data by means of a leave-one-glacier-out cross-validation (site-independent GBR) and a leave-one-season-out cross-validation (season-independent GBR). Both the site-independent and the season-independent GBRs reduce the bias and increase the correlation between the precipitation of the original reanalyses and the winter mass balance data of the glaciers. Finally, the GBR models are used to derive SWE trends on glaciers between 1981 and 2021. The resulting trends are more pronounced than those obtained from the total precipitation of the original reanalyses. On a regional scale, significant 41-year SWE trends over glaciers are observed in the Alps (MERRA-2 season-independent GBR: +0.4 %/year) and in Western Canada (ERA-5 season-independent GBR: +0.2 %/year), while significant positive/negative trends are observed in all regions for single glaciers or specific elevations. Negative (positive) SWE trends are typically observed at lower (higher) elevations, where the impact of rising temperatures is more (less) dominant.
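For readers who want a concrete picture of the workflow sketched in the abstract, the following is a minimal, illustrative sketch of a leave-one-glacier-out (site-independent) GBR evaluation. It assumes scikit-learn and a table `df` with hypothetical column names (reanalysis predictors, winter mass balance `bw`, glacier identifiers); none of these names are taken from the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import LeaveOneGroupOut

# df: one row per glacier site and accumulation season, containing reanalysis
# predictors (e.g. total precipitation "tp", air temperature "t2m", relative
# humidity "rh"), topographical predictors (e.g. elevation difference between
# glacier site and reanalysis grid cell "dz"), the observed winter mass
# balance "bw" and a glacier identifier "glacier_id".
predictors = ["tp", "t2m", "rh", "dz", "slope", "aspect"]
X = df[predictors].to_numpy()
y = df["bw"].to_numpy()
groups = df["glacier_id"].to_numpy()

logo = LeaveOneGroupOut()                      # leave-one-glacier-out cross-validation
pred = np.full(len(y), np.nan)
for train_idx, test_idx in logo.split(X, y, groups):
    gbr = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
    gbr.fit(X[train_idx], y[train_idx])
    pred[test_idx] = gbr.predict(X[test_idx])  # site-independent estimate

bias = np.mean(pred - y)                       # compared against winter mass balance
corr = np.corrcoef(pred, y)[0, 1]
```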
Matteo Guidicelli et al.
Status: final response (author comments only)
RC1: 'Comment on tc-2022-69', Anonymous Referee #1, 01 Jun 2022
In "Snow accumulation over the world’s glaciers (1981-2021) inferred from climate reanalyses and machine learning", a machine learning model is applied to 95 glaciers on 3 continents to downscale precipitation and other variables from commonly used reanalysis products.
The problems begin with the title, which overstates its importance. Only a tiny fraction, in fewer than half of the continents, of the world's glaciers are examined. The manuscript has too many figures and tables. The manuscript is supposed to be within 12 journal pages for TCD. The tables and figures alone, most of which occupy a full page, would take up this much space. The figures are bloated. For example, there is no need to illustrate "Tree 1" nor "Tree N", both of which are identical in Figure 3. The PCA section (4.1) doesn't tell the reader much more than the fact that elevation is the most important downscaling predictor. The leave one out validation is problematic as there is no independent validation dataset used, meaning that biases in precipitation are unlikely to be identified.
ERA-5 and MERRA-2 reanalyses are used without any mention of their potential large biases in the mountains. For example, Liu and Margulis (2019) report that MERRA-2 underestimates snowfall (which is based on the "PRECTTOLAND" variable used here) by 54% in High Mountain Asia. It's not clear to me that the downscaling techniques presented here will correct that bias, as no independent evaluation of precipitation is presented. Melt and sublimation are ignored in the "winter mass balance," which is then the wrong term.
After carefully searching through the text, I still cannot understand how precipitation phase was treated. It seems to have been ignored as SWE is used interchangeably with the downscaled precipitation on glaciers. But then, in Table B1 and B2 ERA-5/MERRA-2 snowfall variables are listed as predictors?
Because of its excessive length, lack of clarity, and questionable assumptions, I recommend this manuscript be rejected. For a resubmission, I suggest the authors consider an independent evaluation of snow accumulation and at least an explanation of how precipitation phase was treated. The size of the figures and tables needs to be cut approximately in half.
Works cited
Liu, Y., and Margulis, S. A.: Deriving Bias and Uncertainty in MERRA-2 Snowfall Precipitation Over High Mountain Asia, Frontiers in Earth Science, 7, 10.3389/feart.2019.00280, 2019.
AC1: 'Reply on RC1', Matteo Guidicelli, 25 Jul 2022
We would like to acknowledge the reviewer for this thorough and critical review that has helped us to sharpen the focus of our study.
In the following, we report our responses (bold) to the reviewer's concerns (within quotation marks).
“The problems begin with the title, which overstates its importance. Only a tiny fraction, in fewer than half of the continents, of the world's glaciers are examined. The manuscript has too many figures and tables. The manuscript is supposed to be within 12 journal pages for TCD. The tables and figures alone, most of which occupy a full page, would take up this much space. The figures are bloated. For example, there is no need to illustrate "Tree 1" nor "Tree N", both of which are identical in Figure 3. The PCA section (4.1) doesn't tell the reader much more than the fact that elevation is the most important downscaling predictor.”:
Title: We agree that the term “world’s glaciers” can be misleading. In response to this comment we will change the title to: “Snow accumulation over glaciers in the Alps, Scandinavia, Central Asia and Western Canada (1981-2020) inferred from climate reanalysis and machine learning”
Number of figures and tables: We agree that some simplification is beneficial to the paper, and we will accordingly perform major changes, including a reduction of the number of figures/tables as well as of their content wherever possible, as briefly described in the following:
Tables 2 and 3 will be moved to the Supplementary material, as will Fig. 2. Fig. 5 could also be moved to the Supplementary material, even though it shows that predictors other than elevation are important to explain the different biases between the reanalysis precipitation and the snow accumulation on glaciers.
We also agree that Sec. 4.1 needs to be modified in order to better quantify the added value of each group of predictors on the model's performance. In the revised version of the paper we will show the changes in terms of overall model performance when suppressing the downscaled predictors (and/or other predictors, e.g. topographical ones). In fact, this might be a better evaluation of the predictors' importance than only showing the frequency of use of the main predictors (Fig. 4a and b) and their correlations (Fig. 4c and d).
Fig. 3 will be simplified and replaced by a smaller figure without the illustration of the “Trees”.
“The leave one out validation is problematic as there is no independent validation dataset used, meaning that biases in precipitation are unlikely to be identified.”:
Many thanks for this thought. However, we do not fully agree with this statement. For the “site-independent GBR”, the model is always validated on a glacier that is independent of the model's training. Thus, as stated in the manuscript, the leave-one-glacier-out cross-validation allows evaluating the generalization of the machine learning models for glaciers located in the same regions as the training data. Fig. 9 shows a more robust validation, where the performance of the machine learning models is also evaluated for completely independent regions (removing neighboring glaciers from the training data). Biases of the reanalysis precipitation against snow accumulation data (based on ground measurements and extrapolation techniques, see Sec. 2.2) on the glaciers of the study are therefore identified (see Figs. 6 and 7).
Despite the glaciers used for validation being independent of the GBR model's training, it is true that they have an influence on the choice of the optimal hyperparameters of the GBR model, i.e. the GBR model was optimized to perform well on the validation data. However, each single glacier (1 out of 95 glaciers) used for the validation has a very limited weight on the overall performance (mean squared error) and on the choice of the GBR's hyperparameters.
In order to make the proposed method even more robust, we will also define the hyperparameters independently of the test sites, i.e. in turn, each glacier will be used to test a GBR model trained and validated (with a k-fold cross-validation for the selection of the hyperparameters) on the other glaciers.
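For illustration only, a nested cross-validation of this kind could look as follows with scikit-learn (`X`, `y` and `groups` stand for the predictor matrix, the winter mass balance targets and the glacier identifiers; the parameter grid and values are assumptions, not the paper's settings):

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, GroupKFold, LeaveOneGroupOut

param_grid = {"n_estimators": [100, 300, 500], "max_depth": [2, 3, 4]}

outer = LeaveOneGroupOut()                         # test glacier held out completely
for train_idx, test_idx in outer.split(X, y, groups):
    # hyperparameters are chosen with a k-fold CV over the remaining glaciers only
    inner = GroupKFold(n_splits=5)
    inner_splits = list(inner.split(X[train_idx], y[train_idx], groups[train_idx]))
    search = GridSearchCV(GradientBoostingRegressor(), param_grid,
                          cv=inner_splits, scoring="neg_mean_squared_error")
    search.fit(X[train_idx], y[train_idx])
    test_pred = search.best_estimator_.predict(X[test_idx])  # fully independent test
```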
“ERA-5 and MERRA-2 reanalyses are used without any mention of their potential large biases in the mountains. For example, Liu and Margulis (2019) report that MERRA-2 underestimates snowfall (which is based on the "PRECTTOLAND" variable used here) by 54% in High Mountain Asia.”:
We are fully aware of the limitations of reanalyses (because of missing and/or highly inaccurate in-situ observations) in high-mountain regions, and specifically for precipitation. In fact, our whole study is in principle motivated by this major challenge of improving the quantification of high-altitude (solid) precipitation and SWE. In the current manuscript, reanalysis biases in high-mountain regions are thus clearly mentioned, including references, in the introduction (lines 60-67). However, we agree that the biases observed in previous studies have not been described and quantified in sufficient detail. In the revised paper we will better include them in the introduction, thus enhancing the comprehensiveness of the manuscript. We will also add the respective references in the revised manuscript (e.g. Nitu et al., 2018; Zandler et al., 2019).
References:
- Nitu, R., Roulet, Y. A., Wolff, M., Earle, M., Reverdin, A., Smith, C., ... and Yamashita, K.: WMO Solid Precipitation Intercomparison Experiment (SPICE), Tech. Rep., World Meteorological Organization, http://hdl.handle.net/20.500.11765/10839, 2018.
- Zandler, H., Haag, I., and Samimi, C.: Evaluation needs and temporal performance differences of gridded precipitation products in peripheral mountain regions, Scientific Reports, 9, 15118, https://doi.org/10.1038/s41598-019-51666-z, 2019.
“It's not clear to me that the downscaling techniques presented here will correct that bias, as no independent evaluation of precipitation is presented.”:
The reanalysis precipitation is compared against snow accumulation data on glaciers. These data are clearly independent, and they are to our knowledge the only, and thus the best possible, source of (cumulative) precipitation at very high elevation. The machine learning model is trained and validated against these snow accumulation data on glaciers. In general, from the results presented in the manuscript (e.g. Figs. 6 and 7) it is clear that, on average, the machine learning models can adjust the reanalysis bias against snow accumulation on glaciers, which is among the main purposes of the study.
“Melt and sublimation are ignored in the "winter mass balance," which is then the wrong term.”:
We do not fully agree with the reviewer here. The term “winter mass balance” refers to the snow water equivalent found on the glacier close to the time of maximum snow depth, i.e. around the end of winter. Therefore, the winter mass balance, by definition, includes loss terms such as melt and sublimation, although they are not individually quantified. Furthermore, our periods of analysis are adjusted to optimally match the period where melt and sublimation are small in comparison to accumulation by solid precipitation.
“After carefully searching through the text, I still cannot understand how precipitation phase was treated. It seems to have been ignored as SWE is used interchangeably with the downscaled precipitation on glaciers. But then, in Table B1 and B2 ERA-5/MERRA-2 snowfall variables are listed as predictors?”:
Indeed, the precipitation phase was ignored. In the revised paper we will describe this choice more clearly, and also why we think that this simplification is justified, as briefly summarized in the following.
We adjusted the total precipitation variable of the reanalyses (“tp” for ERA-5 and “PRECTOTLAND” for MERRA-2; see Sec. 2.1.1 and Sec. 2.1.2). We are aware that a different adjustment factor of precipitation might be needed depending on the precipitation phase. However, as we only adjust the total precipitation that occurred during the accumulation season, the adjustment factors represent the “average” adjustment factor of all precipitation events. The snowfall variable was used as a predictor in order to give the GBR model the chance to learn that a different “average” adjustment factor should be applied depending on the proportion of snowfall in the total precipitation (i.e. depending on the main precipitation phase during the accumulation season).
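To make this explicit, here is a small sketch (with hypothetical column names, assuming pandas) of how the adjustment-factor target and the snowfall-fraction predictor described above could be constructed:

```python
import pandas as pd

# df: seasonal sums per glacier site and accumulation season (assumed columns):
#   "bw"        observed winter mass balance (mm w.e.)
#   "tp"        reanalysis total precipitation summed over the season (mm)
#   "snowfall"  reanalysis snowfall summed over the season (mm)
df["adj_factor"] = df["bw"] / df["tp"]        # target: "average" adjustment factor of the season
df["snow_frac"] = df["snowfall"] / df["tp"]   # predictor: proportion of solid precipitation

# The GBR is trained to predict "adj_factor"; an adjusted SWE estimate is then
# obtained as swe = gbr.predict(features) * df["tp"], i.e. the corrected seasonal
# total precipitation, without an explicit treatment of the precipitation phase.
```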
RC2: 'Comment on tc-2022-69', Anonymous Referee #2, 18 Jun 2022
Guidicelli et al. propose an interesting method to downscale and bias-correct reanalysis precipitation data to the elevation and sites of glaciers in 4 regions of the world. Two reanalyses are used: ERA5 and MERRA2. The method is based on gradient boosting regressions, a technique from the field of artificial intelligence. The performance of this method is evaluated through cross-validation and discussed in terms of both temporal and spatial extrapolation. Finally, precipitation trends on glaciers are derived for each of the 4 regions based on the bias-corrected and downscaled reanalysis data. The study tackles the very interesting and yet unsolved issue of high-altitude precipitation amounts with tools from machine learning. It adds to the existing literature by focusing on glacier winter mass balances, used as a proxy for winter precipitation at high altitudes. In my opinion, this makes the topic of this study very relevant. While the analyses displayed are in general sound, I advise a revision of the paper with respect to concerns regarding the spatial generalization capability of the models and the derivation of trends, see below.
MAIN COMMENTS
1 - Comparison/justification with respect to other AI techniques for bias correction and downscaling in the literature: Even though the introduction describes well the existing literature on AI-based downscaling/bias-correction methods, the choice of GBR is barely justified with respect to other techniques. I would have expected elements in that direction in the manuscript, especially since a section of the Discussion is entitled '5.1 Advantages and disadvantages of gradient boosting regressors'.
2 - Limits inherent to the number of available learning data: Some of the regions of interest, e.g. Canada and Central Asia, have in total fewer than 20 glaciers used in this study, which is an extremely low percentage of the number of glaciers that they truly host. This in my opinion strongly impedes the (spatial) generalization capability of the GBR models learned on these data to the region of interest as a whole. Although this is not what the authors do in the paper, this is what the title suggests while mentioning the world's glaciers. I would strongly recommend to modify this misleading title, as the developed technique is in practice not applied to derive precipitation data over any glacier of the world, but is limited to (i) the regions of interest and (ii) the few glaciers with data in these regions. On top of the low sampling level for application of machine learning techniques in general, there may furthermore be a strong sampling bias in the glacier data from WGMS, for instance towards large glaciers in the European Alps, so that the representativity of the glaciers with data with respect to the regions of interest is questionable. It follows that it is hard to know whether models or conclusions inferred solely based on these very few glaciers are representative of the region as a whole. I would very much like the authors to comment on this. "The good performance of the GBRs in terms of bias suggests that they can be used for SWE estimates over glaciers where no ground observations are available (site-independent GBRs)". Despite being better than the benchmark, the performance of site-independent GBR models is limited (Fig 9) and decreases when data of neighbouring glaciers are excluded from the training. Considering that, and the likely sampling biases of WGMS data, I think the authors could revise this sentence.
3 - Trends: In my opinion the derivation of trends based on the GBR-modelled precipitation should be accompanied by sensitivity tests to ascertain the robustness and uncertainties of this method. Typically, data-withdrawal techniques could be used on the longest time series to evaluate the robustness/uncertainty of the trends derived when missing data are encountered. The distribution of the data gaps within the time series (for instance one missing season every two years vs. 20 years with data and nothing for the following 20 years) may also play a role, and it would be good to have an insight into this and possibly only derive trends for glaciers with a sufficient number of data (seasons). The strong limitation of temporal extrapolation for some glaciers is highlighted in l. 350-355, hence making a derivation of trends on these glaciers meaningless.
MINOR COMMENTS
- The GBR considers as predictors both elevation differences between reanalysis pixel and glacier site, and downscaled variables like temperature, whereby the downscaling of temperature itself mostly relies on this altitude difference. Hence there is a high redundancy in the chosen predictors. Did you test suppressing the downscaled predictors?
- The predictors in the PCA figures (4 and 5) are often barely legible. Fig 5 could maybe join the supplemental material.
- l. 264-274: could the different magnitude in factors relate to known biases/weaknesses of the reanalyses in representing different types of precipitation events?
- l. 311: "their performance is worse than the site-independent models". It is not so clear to me why: could you please explain?
- l. 448: why were more topographic predictors used in the ERA-5 GBRs than in the MERRA-2 ones?
- Fig 2 could join the Supplemental material.
- Fig 6: could the absolute biases also be mentioned?
- Fig 7: a ranking of the glaciers with respect to altitude, or to the number of seasons with Bw_data, would enable to more efficiently support the analysis related to this figure, please consider this. The same applies to Fig 11.
- Tables 1 and 2 could join the supplemental material.
- Section 5.2: this recent literature could also be of interest: https://doi.org/10.5194/hess-24-5355-2020; https://doi.org/10.5194/essd-14-1707-2022 (update of Durand et al., 2009).
AC2: 'Reply on RC2', Matteo Guidicelli, 25 Jul 2022
We would like to thank the reviewer for the positive appreciation of our work and the constructive comments that will help us to improve the paper considerably.
In the following, we report our responses (bold) to the reviewer's concerns (within quotation marks).
MAIN COMMENTS
“1 - Comparison/justification with respect to other AI techniques for bias correction and downscaling in literature : Even though the introduction describes well the existing literature on AI-based downscaling/bias correction methods, the choice of GBR is barely justified with respect to other techniques. I would have expected elements in that direction in the manuscript, especially since a section of the Discussion is entitled : '5.1 Advantages and disadvantages of gradient boosting regressors'.”
We decided to use a tree-based algorithm because of its better interpretability in terms of predictor importance compared to other methods (e.g. neural networks). Furthermore, gradient boosting is a gradient descent algorithm in which each additional tree tries to get the model closer to the target, and thus primarily reduces the bias, whereas a random forest primarily reduces the variance. We agree, however, that more background on our choice is needed. A dedicated discussion will be added to Sec. 5.1.
“2 - Limits inherent to the number of available learning data : Some of the regions of interest, e.g. Canada and Central Asia, have in total less than 20 glaciers used in this study, which is an extremely low percentage of the number of glaciers that they truly host. This in my opinion strongly impedes the (spatial) generalization capability of the GBR models learned on these data, to the region of interest as a whole. Although this is not what the authors do in the paper, this is what the title suggests while mentioning the world's glaciers. I would strongly recommend to modify this misleading title, as the developed technique is in practice not applied to derive precipitation data over any glacier of the world, but is limited to (i) the regions of interest and (ii) the few glaciers with data in these regions. On top of the low sampling level for application of machine learning techniques in general, there may be furthermore a strong sampling bias in the glacier data from WGMS, for instance towards large glaciers in the European Alps, so that the representativity of the glaciers with data w/r to the regions of interest is questionable. It follows that it is hard to know whether models or conclusions inferred solely based on these very few glaciers, are representative of the region as a whole. I very much would like the authors to comment on this. "The good performance of the GBRs in terms of bias suggests that they can be used for SWE estimates over glaciers where no ground observations are available (site-independent GBRs)". Despite being better than the benchmark, the performance of site-independent GBR models is limited (Fig 9) and decreases when data of neighbouring glaciers are excluded from the training. Considering that, and the likely sampling biases of WGMS data, I think the authors could revise this sentence.”
We agree with the reviewer regarding most aspects mentioned here. In the revised paper we will more critically discuss our approaches and also demonstrate the limitations of our approach, for example in the case of a limited number of observations.
Title: We agree that the term “world’s glaciers” can be misleading. We will change the title to: “Snow accumulation over glaciers in the Alps, Scandinavia, Central Asia and Western Canada (1981-2020) inferred from climate reanalysis and machine learning”
Regarding the sentence mentioned ("The good performance of the GBRs in terms of bias suggests that they can be used for SWE estimates over glaciers where no ground observations are available (site-independent GBRs)"), we fully agree that our statement was too optimistic/too general, and we will better specify that the model can be applied to other glaciers only if the glacier is in proximity to the glaciers used in the training. Moreover, we will specify that the resulting performance strongly depends on the characteristics of the glaciers with respect to the glaciers used in the training.
“3 - Trends : In my opinion the derivation of trends based on the GBR modelled precipitation, should be accompanied with sensitivity tests to ascertain the robustness and uncertainties of this method. Typically, data-withdrawal techniques could be used on the longest time-series to evaluate the robustness/uncertainty of the trends derived when missing data are encountered. The distribution of the data gaps within the time-series (= for instance one missing season every two year, vs 20 years with data and nothing for the following 20 years) may also play a role, and it would be good to have an insight into this and possibly only derive trends for glaciers with a sufficient number data (seasons). The strong limitation of temporal extrapolation for some glaciers is highlighted l 350-l355, hence making a derivation of trends on these glaciers meaningless.”
Thanks a lot, this is a very valid comment and a good suggestion.
In the trend analysis, the GBR models are applied over 41 years for all the glaciers of the study. The Bw data were only used to train the GBR models and not to derive the trends. Thus, we propose the following sensitivity test to be included in the revised manuscript: similarly to Fig. 9a, c, e and g, and only for glaciers with long time series of Bw data, we will show the trends depending on (a) the number of training seasons used for the validated glacier and (b) the distribution of the available Bw data. The sensitivity test would thus allow us to further evaluate the general expected robustness and uncertainties of the trends depending on the number of years with available Bw data used for training. However, this is only feasible for the season-independent GBR.
In fact, trends are also derived with the site-independent GBRs, which are not affected by the number of years with available Bw data (because no Bw data of the validated glacier is used for training). The fact that the site-independent GBRs often perform better than the season-independent GBRs in terms of temporal correlation with the Bw data is an indicator that the number of available years with Bw data does not necessarily need to be high in order to accurately represent the temporal variability of the snow accumulation over the years and thus to derive trends. In the revised manuscript, we will determine the trends only for glaciers with long time series of Bw data, i.e. only for glaciers where the temporal correlation between the GBR models and the Bw data can be evaluated. For these glaciers, the comparison between the trends obtained from the season-independent and site-independent GBRs will allow a better discussion of the potential use of the site-independent GBRs for the derivation of trends on glaciers with no Bw data.
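As a purely illustrative sketch of the trend computation and the data-withdrawal test discussed here (assuming SciPy; the robust slope estimator and the withheld fractions are our own choices, not values from the paper):

```python
import numpy as np
from scipy.stats import theilslopes

years = np.arange(1981, 2022)                  # 41 accumulation seasons

def relative_trend(swe, yrs):
    """Relative SWE trend in %/year from a robust (Theil-Sen) slope."""
    slope = theilslopes(swe, yrs)[0]
    return 100.0 * slope / np.mean(swe)

# swe_full: GBR-derived SWE series for one glacier (one value per season)
trend_full = relative_trend(swe_full, years)

# Data-withdrawal test: remove part of the Bw seasons from the training data,
# retrain the season-independent GBR (retraining step not shown), and compare trends.
rng = np.random.default_rng(42)
for frac in (0.1, 0.3, 0.5):
    withheld = rng.random(years.size) < frac   # seasons withheld from training
    # swe_reduced = retrain_and_predict(withheld)         # hypothetical retraining step
    # print(frac, relative_trend(swe_reduced, years))
```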
MINOR COMMENTS
- "the GBR consider as predictors both elevation differences between reanalysis pixel and glacier site, and downscaled variables like temperature, whereby the downscaling of temperature itself mostly relies on this altitude difference. Hence there is a high redundancy in the chosen predictors. Did you test suppressing the downscaled predictors ?"
Thanks for this interesting comment. The high correlation between predictors is only a problem for the interpretability of the predictors’ importance. However, this does not affect the performance of the GBR because decision trees are by nature not affected by multi-collinearity. If two predictors are highly correlated, the tree will choose only one of the two predictors when deciding upon a split.
As suggested, in the revised paper we will show the changes in terms of overall model performance when suppressing the downscaled predictors (and/or other variables, e.g. topographical ones) in Sec. 4.1. This will be helpful to quantify the added value of each group of predictors. Correspondingly, Fig. 4 will be modified. In fact, we are quite confident that this will be a better evaluation of the predictors' importance than only showing the frequency of use of the main predictors (Fig. 4a and b).
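A minimal sketch of such a predictor-group suppression test, assuming scikit-learn and the hypothetical table and column names used in the earlier sketches (`df`, `predictors`); the group contents are illustrative:

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

predictor_groups = {
    "downscaled":    ["t2m", "rh"],             # downscaled meteorological variables
    "topographical": ["dz", "slope", "aspect"],
}

logo = LeaveOneGroupOut()

def cv_mse(cols):
    """Leave-one-glacier-out cross-validated mean squared error."""
    scores = cross_val_score(GradientBoostingRegressor(), df[cols], df["bw"],
                             groups=df["glacier_id"], cv=logo,
                             scoring="neg_mean_squared_error")
    return -scores.mean()

baseline = cv_mse(predictors)
for name, cols in predictor_groups.items():
    reduced = [p for p in predictors if p not in cols]
    print(f"MSE change without {name} predictors: {cv_mse(reduced) - baseline:+.3f}")
```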
- "the predictors in the PCA figures (4 and 5) are often barely lisible. Fig 5 could maybe join the supplemental material."
Fig. 5 will be moved to the Supplementary material. We will also increase the font size and avoid the overlapping of predictor names.
-" l 264-274 : could the different magnitude in factors relate to known biases / weaknesses of the reanalyses in representing different types of precipitation events ?"
Yes, this is a good suggestion and we will invest more time in this, trying to link our results with the literature.
- "l 311 : "their performance is worse than the site-independent models". It is not so clear for me why : could you please explain ?"
The season-independent GBR model has a higher number of trees, and fewer samples are needed to create a new leaf of a tree (i.e. to predict a different adjustment factor), than the site-independent GBR. Thanks to this higher complexity, if Bw data of the validated glacier are used to train the season-independent model, the latter can learn the specific characteristics of the validated glacier and perform better than the site-independent model.
On the other hand, if no Bw data of the validated glacier are used to train the season-independent GBR, its performance is worse than that of the site-independent GBR, because it overfits the training data.
This discussion will be added in shorter form to the revised paper.
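Purely as an illustration of this contrast (the parameter values below are assumptions, not the paper's tuned settings), the two model configurations differ roughly as follows:

```python
from sklearn.ensemble import GradientBoostingRegressor

# Season-independent GBR: more trees and smaller leaves, i.e. a more complex model
# that can fit glacier-specific behaviour when Bw data of that glacier are in the training set.
season_independent = GradientBoostingRegressor(n_estimators=500, min_samples_leaf=2)

# Site-independent GBR: fewer trees and larger leaves, i.e. a simpler model that
# generalizes better to glaciers never seen during training.
site_independent = GradientBoostingRegressor(n_estimators=150, min_samples_leaf=10)
```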
- "l 448 : why were more topographic predictors used in the ERA-5 GBRs than in the MERRA-2 ones ?"
We used all the topographical predictors describing the sub-grid complexity of each reanalysis product, and ERA-5 provides more such descriptors than MERRA-2.
- "Fig 2 could join the Supplemental material"
Yes, we agree.
- "Fig 6 : could the absolute biases also be mentioned ?"
Yes, we will also evaluate and report the mean bias error in addition to the root mean squared error. However, the figure is already too busy to accommodate more numbers, so we will discuss these results in the text.
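For reference, the two error measures mentioned here, written out as a short sketch:

```python
import numpy as np

def mean_bias_error(pred, obs):
    """Signed mean difference; positive values indicate overestimation."""
    return np.mean(np.asarray(pred) - np.asarray(obs))

def rmse(pred, obs):
    """Root mean squared error (magnitude of errors, regardless of sign)."""
    return np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2))
```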
- "Fig 7: a ranking of the glaciers with respect to altitude, or to the number of seasons with Bw_data, would enable to more efficiently support the analysis related to this figure, please consider this. The same applies to Fig 11."
Thanks for the suggestion. We will modify Fig. 7 and Fig. 11 as proposed.
- "Tables 1 and 2 could join the supplemental material"
Yes, we agree.
- "Section 5.2 : this recent literature could also be of interest : https://doi.org/10.5194/hess-24-5355-2020; https://doi.org/10.5194/essd-14-1707-2022 (update of Durand et al., 2009)."
Thanks. We will include this literature in the discussion.