This work is distributed under the Creative Commons Attribution 4.0 License.
Global evaluation of process-based models with in situ observations to detect long-term change in lake ice
Abstract. Lake ice phenology has been used extensively to study the impacts of anthropogenic climate change, owing to the widespread occurrence of lake ice and the length of time series available for such studies. The proliferation of process-based lake models and gridded climate data has enabled the modeling of ice phenology across broad spatial scales, for example in regions where lakes are not sampled. In this study, we used ice phenology outputs from an ensemble of lake-climate model projections to directly compare their performance with in situ data. Generally, we found that the lake models captured the range of variability of observational records (RMSE ice on = 22.9 days [4.7, 95.4]; RMSE ice off = 17.4 days [6.1, 76.5]), and particularly the long-term trends in temperate regions. However, the models performed poorly in extremely warm years or when there were rapid short-term changes in ice phenology. The location of the lakes, such as latitude and longitude, as well as lake morphology, such as lake depth and surface area, significantly influenced model performance. For example, the models performed best in small, shallow lakes and worst in large, deep lakes. Our analysis suggests that the lake models tested can reliably estimate long-term trends in lake ice cover, particularly when averaged across large spatial scales, but widespread in situ observations remain critical for capturing extreme events.
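As a minimal sketch of the evaluation summarized above, assuming a matched table of modelled and observed ice dates per lake and winter (the file and column names here are hypothetical and not the study's actual data), the per-lake RMSE could be computed along these lines:

```python
import numpy as np
import pandas as pd

# Hypothetical matched table: one row per lake and winter, with observed and
# modelled ice-on / ice-off dates expressed as day of year.
df = pd.read_csv("ice_phenology_matched.csv")

def rmse(obs, mod):
    """Root-mean-square error in days, skipping missing pairs."""
    diff = (mod - obs).dropna()
    return float(np.sqrt((diff ** 2).mean()))

per_lake = df.groupby("lake_id").apply(
    lambda g: pd.Series({
        "rmse_ice_on": rmse(g["obs_ice_on_doy"], g["mod_ice_on_doy"]),
        "rmse_ice_off": rmse(g["obs_ice_off_doy"], g["mod_ice_off_doy"]),
    })
)

# Median and range across lakes, comparable in spirit to the bracketed values above.
print(per_lake.describe())
```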
Status: closed
-
CC1: 'Comment on tc-2022-31', Zeli Tan, 15 Feb 2022
Nice work. It should be noted that the reference for ALBM should be Tan et al. (2015).
Tan, Z., Zhuang, Q., & Walter Anthony, K. (2015). Modeling methane emissions from arctic lakes: Model development and site‐level study. Journal of Advances in Modeling Earth Systems, 7, 459-483.
Citation: https://doi.org/10.5194/tc-2022-31-CC1 -
CC2: 'Reply on CC1', Sapna Sharma, 15 Feb 2022
Thanks Zeli! We'll add in the reference.
Citation: https://doi.org/10.5194/tc-2022-31-CC2
-
CC3: 'Comment on tc-2022-31', Paul PUKITE, 18 Feb 2022
I don't completely understand the reliance on so few lakes for the analysis. Minnesota alone has hundreds of lakes compiled on its Department of Natural Resources site (last time I checked), many with records dating to before 1900. There are good statistical techniques for dealing with data of varying intervals, so that should not be an issue. Here is what I have compiled, broken down by latitude (it hasn't been updated for a few years):
Citation: https://doi.org/10.5194/tc-2022-31-CC3 -
CC4: 'Reply on CC3 (broken images)', Paul PUKITE, 06 Mar 2022
Obviously this commenting system is broken. Here are links to the images that wouldn't post.
Citation: https://doi.org/10.5194/tc-2022-31-CC4 -
CC5: 'Reply on CC4 (try again)', Paul PUKITE, 06 Mar 2022
https://imagizer.imageshack.com/img706/7866/srd.gif
https://imagizer.imageshack.com/img706/9683/v4hb.jpg
Citation: https://doi.org/10.5194/tc-2022-31-CC5 -
CC6: 'Reply on CC5', Sapna Sharma, 07 Mar 2022
Dear Paul,
Thank you for your feedback. Unfortunately, I couldn't see the figures, but we have actually already acquired and used the Minnesota ice phenology data.
For this study, we used 61,690 records of in situ observations for 2,658 lakes spanning from 1874 to 2020. I'm not quite sure why the manuscript reads as though we only used 4 lakes - we used 4 lake models with gridded modelled data worldwide, but those are modelled estimates and projections, which were compared with observations from 2,658 lakes. Perhaps in our next revision we can include a map of all of our lake observations and highlight the difference between the 4 lake models and the 2,658 lakes with in situ observations?
Thanks!
Sapna
Citation: https://doi.org/10.5194/tc-2022-31-CC6
-
RC1: 'Comment on tc-2022-31', Anonymous Referee #1, 06 Mar 2022
Based on my assessment, I believe this manuscript by Mohammad et al. should be subject to a “conditional rejection”.
The authors state three objectives and would like to answer: “(i) how well do lake models capture the timing and duration of observed seasonal ice cover? (ii) does model accuracy differ across lake types and climatic regions?, and (iii) do lake ice models better capture long-term observed variability compared to short-term change”.
These are very good questions, and I applaud the authors. However, I have difficulty understanding the methodology and the conclusions ultimately drawn from this work.
My major concerns are:
- What can the lake ice research community gain from this study?
- What does “Global evaluation” stand for? The authors claim “We obtained 61,690 records of in situ observations for 2,658 lakes spanning from 1874 to 2020,” but in this study only 4 lakes were investigated.
- “Process-based models”: I don’t see any description of which processes those lake models deal with. Section 2.1.1 is very difficult to understand, except perhaps for those who were deeply involved in the project(s) mentioned by the authors. I would encourage the authors to say a lot more about those lake models and how they were implemented and driven by CMIP5. How were those lake model runs carried out at a “global” scale? I suppose this is linked with the massive record mentioned in the manuscript (see comment above). I think the authors need to say more about the overall background of lake ice simulation, and in particular how we should understand the “process-based models”.
- Section 2.2 is also difficult to understand.
- 2.2.1 What model performance? I see statistical methods. If you mean to discuss the statistical methods/models that will be used to handle the lake model results and the in situ observations, please be more specific and write it clearly. So far, I see a mixture of many things.
- 2.2.2. Same impression as for Section 2.2.
- The entire Section 2 needs a substantial revision. The questions that need to be answered are:
- What are the lake model data, i.e., the simulations, domain, and data coverage, and which particular lake ice parameters do you want to investigate? I see only “ice on” and “ice off”. Presumably the authors are referring to the freeze-up and break-up dates; if so, I would prefer those terms for better clarity. What about other lake-ice-related parameters, such as ice thickness and snow cover?
- What are the in situ observations, i.e., their domain and data coverage, and which particular parameters do you want to use and compare with the model results?
- Which statistical methods/models do you want to apply, separately or together, to both the model-calculated parameters and the in situ observed parameters (if I have understood correctly)?
- A strong argument that it is sufficient to investigate only 4 of the 2,658 lakes. It may be even better to show a map of those lakes to echo the “Global evaluation” of the title.
5. I see the same problem for Section 3. I think Sections 2 and 3 together need a major structural change; at present, both sections mix data and methodology.
6. How can we understand the “Factors affecting model performance” in both Sections 2 and 3? What the authors list is connected with “machine learning”, which does not necessarily represent lake ice physics, and I am not sure how those considerations connect with lake ice phenology. I think anything linked with an artificial-intelligence methodology such as machine learning in Earth science investigations needs extra caution and better, clearer arguments.
7. In the end, the authors conclude A) “when using these data: i) consider the relationship between lake ice and extreme climate events, ii) be cautious with predictions for regions currently without in situ representation, iii) when possible, use ensemble model approaches to reduce variability in predictions, and iv) estimate long-term trends rather than specific lake responses.” and B) “For ice on, modelled estimates were often more conservative than in situ observations which predicted a later ice on date. In fact, the real-world observations had later ice on and earlier ice off dates than any of the estimates from all three of the RCP scenarios.”
My questions:
- What are “these data”? The modelled and observed lake ice phenology from those 4 lakes?
- The second point, “be cautious...”, reminds me that the authors stated: “there are an estimated 50 million lakes around the world that freeze each winter but do not have long-term observations. Thus, quantifying changes in lake ice worldwide requires modelling”. So how can we “be cautious”, and to what degree? I would rather see the authors give some concrete numbers, for example: without in situ observations, the model-predicted lake ice phenology is likely to have offsets of V1 and V2 in parameters AA and BB, where AA and BB are lake model parameters and V1 and V2 are their values (see the sketch after these questions).
- Why do we need to reduce the “variability in predictions”?
- I don’t quite understand point iv).
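To illustrate the kind of offset reporting I mean, a minimal sketch (with hypothetical file and column names, not the authors' actual data or method) could be:

```python
import pandas as pd

# Hypothetical matched table of modelled and observed ice dates (day of year).
df = pd.read_csv("ice_phenology_matched.csv")

# Mean model-minus-observation offsets; positive means the model is later
# than the observations.
offset_ice_on = (df["mod_ice_on_doy"] - df["obs_ice_on_doy"]).mean()
offset_ice_off = (df["mod_ice_off_doy"] - df["obs_ice_off_doy"]).mean()

print(f"Mean ice-on offset:  {offset_ice_on:+.1f} days")
print(f"Mean ice-off offset: {offset_ice_off:+.1f} days")
```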
8. Let me copy and paste the objectives here again: “(i) how well do lake models capture the timing and duration of observed seasonal ice cover? (ii) does model accuracy differ across lake types and climatic regions? and (iii) do lake ice models better capture long-term observed variability compared to short-term change”.
So, I think:
(i) is answered; (ii) I cannot tell; and (iii) I still cannot tell the answer for (iii) if I am told only to “estimate long-term trends rather than specific lake responses”.
I would encourage the authors to make a resubmission that focuses on the comparison between lake observations and modelling, with a clear picture of the “Global evaluation of process-based models” and the comparison with long-term observed lake ice phenology, so that readers can take home some concrete and crystal-clear knowledge from this study to improve either their lake models, the in situ observations, or the climate model forcing applied by the lake models.
I cannot recommend this manuscript for publication in TC in its current form without substantial revision and rewriting; I am sorry I cannot be more positive than that.
Regards
Citation: https://doi.org/10.5194/tc-2022-31-RC1 -
RC2: 'Comment on tc-2022-31', Anonymous Referee #2, 21 Mar 2022
Global evaluation of process-based models with in situ observations to detect long-term change in lake ice
Overall, I have serious concerns about this paper and note what I consider to be fatal flaws. While the paper does present an interesting premise, I do not recommend publication at this time. I outline my major concerns here for the authors to take into consideration for possible publication at a later time.
In situ data:
The first thing that strikes me is the lack of information on the validation data. Where are the 2,658 lakes? Are they evenly distributed across the Northern Hemisphere? Do they represent lakes across the entire Northern Hemisphere? How far north does the dataset extend? Not including a map may seem minor, but it is actually quite a major problem, especially given the comments about longitude being an important explanation for the RMSE and the lack of validation data from 0 to -50° longitude. Does that not undermine the result that longitude is the most important factor, when there is little validation data in the region where Figure 3 shows the highest partial dependence (PDP)? (Also, why does the scale for ice off end in a different geographic region than that for ice on? Is this a difference in the geographic coverage of the in situ data for ice on and ice off?)
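For context, a partial-dependence curve of the kind I assume lies behind Figure 3 is typically computed along these lines (a generic sketch with hypothetical inputs, not necessarily the authors' pipeline); wherever the longitude coverage of the validation data is sparse, the corresponding part of the curve is extrapolation rather than evidence:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence

# Hypothetical per-lake table: model error plus candidate predictors.
df = pd.read_csv("per_lake_rmse.csv")
X = df[["latitude", "longitude", "depth_m", "surface_area_km2"]]
y = df["rmse_ice_on"]

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

# Average predicted RMSE as longitude is varied over a grid, with the other
# predictors held at their observed values.
pdp = partial_dependence(rf, X, features=["longitude"], grid_resolution=50)
# In recent scikit-learn versions, pdp["grid_values"][0] is the longitude grid
# and pdp["average"][0] is the curve; points in poorly sampled longitude bands
# should be read with care.
```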
Figure 5 raises some serious concerns for me regarding your in situ data. Ice off in early January for the extreme years, when ice on is around the same time? Does that mean those years were essentially ice free? Looking at ~1977 for Monona, it appears that ice on is about -30 (early December?) while ice off is -20, assuming the plots are aligned, which perhaps they are not. Why is Monona essentially ice free when Mendota is not? Isn't Monona shallower than Mendota? How can it be essentially ice free that year when Mendota is ice covered? Even Lake Michigan was mostly frozen in the late 1970s. It also doesn't match the records online (https://www.aos.wisc.edu/~sco/lakes/monona-dur.gif): the shortest ice season was 49 days in 1997/98. Are you using different data? Something is not quite right here. Perhaps I am not interpreting Figure 5 correctly, but if that is the case then the explanation needs to be improved.
Lake depths:
Lake depth is an extremely important variable to represent in ice modelling. The manuscript notes the use of the Global Lake Data Base (GLDB) for some models and a 50 m depth for CLM4.5. The GLDB used to include assumed values based on geology or other factors where lake depths were unknown, though perhaps this has been improved in recent years. It is certainly a very useful dataset given the lack of gridded bathymetry data available, but it is an assumption that the depths are representative of your grid cells, since they are not all observation based. An acknowledgment of the uncertainty this introduces is important; perhaps add a figure showing the range of depths per grid cell, or something to give the reader a sense of how representative the dataset is?
Are the other models also using values around 50 m for depth in every grid cell? Does this mean that CLM4.5 uses 50 m for northern grid cells as well? This is unclear, and if 50 m is assumed everywhere it is most certainly not a valid assumption, especially at the most northern latitudes of the Northern Hemisphere. This ties back into the lack of a map for your study area/data. If you are only covering regions containing very large lakes, perhaps 50 m is acceptable, but that is very deep for a 'typical' lake and would not be representative of the Northern Hemisphere in general.
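A rough sketch of the per-grid-cell depth summary I have in mind (illustrative only; the grid resolution, file, and column names are assumptions, not taken from the manuscript):

```python
import numpy as np
import pandas as pd

# Hypothetical lake table with coordinates and mean depth (e.g. GLDB-style).
lakes = pd.read_csv("lake_depths.csv")  # columns: lat, lon, depth_m

# Bin lakes onto a 2-degree grid (resolution chosen purely for illustration).
res = 2.0
lakes["cell_lat"] = np.floor(lakes["lat"] / res) * res
lakes["cell_lon"] = np.floor(lakes["lon"] / res) * res

depth_summary = (
    lakes.groupby(["cell_lat", "cell_lon"])["depth_m"]
         .agg(["count", "min", "median", "max"])
)

# How far a fixed 50 m assumption sits from the median lake depth in each cell.
print((50.0 - depth_summary["median"]).describe())
```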
Extreme events:
Oddities in your in situ data aside, one grid cell is not a sufficient example on which to base comments about extreme events. Your discussion says ENSO was responsible for some early break-ups in the literature (line 265: ENSO events have been attributed to several noticeably early break-ups for lakes in recent decades, such as 1972, 1982, and 1997). Those do not appear to be the years with extremely early ice off in Figure 5; what about the other extremely early dates? If you want to include an examination of extreme events, I would suggest picking several geographically different grid cells to compare and doing a more thorough examination.
A minor wording issue jumped out at me here: lines 249-250 (the explanation of the lowest RMSE) read as if you are saying that intermittent ice cover extends into the Arctic and explains the low RMSE in that latitude range.
“…between 50 and 65° latitude which reflects the higher density of lakes in northern latitudes and highest abundance of lakes currently experiencing intermittent ice cover (Sharma et al., 2019).” Is this mis-cited, or is it perhaps referring to a specific geographic region? 50 to 65° latitude in North America covers Lake Winnipeg, Lake Athabasca, and Great Slave Lake. That latitude range also covers the low Arctic, where there is most definitely not intermittent ice cover. And how does intermittent ice cover lead to the low RMSE you are explaining here? The high abundance of lakes in that region does make sense, though. Perhaps revise that sentence to remove the inference that intermittent ice cover results in low RMSE values, or revise it to better explain why it does affect them. It is unclear as written.
Citation: https://doi.org/10.5194/tc-2022-31-RC2