West Greenland ice sheet retreat history reveals elevated precipitation during the Holocene thermal maximum

We investigate changing precipitation patterns in the Kangerlussuaq region of west central Greenland during the Holocene thermal maximum (HTM), using a new chronology of ice sheet terminus position through the Holocene and a novel inverse modeling approach based on the unscented transform (UT). The UT is applied to estimate changes in annual precipitation in order to reduce the misfit between modeled and observed terminus positions. We demonstrate the effectiveness of the UT for time-dependent data assimilation, highlighting its low computational cost and trivial parallel implementation. Our results indicate that Holocene warming coincided with elevated precipitation, without which modeled retreat in the Kangerlussuaq region is more rapid than suggested by observations. It is less conclusive whether high temperatures during the HTM were specifically associated with a transient increase in precipitation, as the results depend on the assumed temperature history. The importance of precipitation in controlling ice sheet extent during the Holocene underscores the importance of Arctic sea ice loss and changing precipitation patterns for the future stability of the Greenland Ice Sheet (GrIS).


Ice-sheet model
We use the 1D, isothermal flowline model with higher-order momentum balance described in Brinkerhoff et al. (2017). The momentum conservation equations are simplified using the Blatter-Pattyn approximation, assuming hydrostatic pressure and negligible vertical resistive stresses (Blatter, 1995; Pattyn, 2003). Default parameter values used in this work are specified in Table 1.
We adopt a linear sliding law of the form given in Eq. (1). Observational data, combined with the retreat chronology in Young et al. (2019, In Review), indicate that ice remained grounded on both the northern and southern flowlines from 11.6 ka BP onward.
2.3 Positive degree day model

Surface mass balance is estimated using a positive degree day (PDD) model (Johannesson et al., 1995). Annual surface mass balance is constructed in the PDD model from estimates of average monthly precipitation and temperature. To assess the sensitivity of modeled retreat history to temperature, we perform experiments using temperature reconstructions from both Buizert et al. (2018) and Dahl-Jensen et al. (1998). Surface temperature T is computed monthly using a lapse-rate correction to modern temperatures, where S_m is the modern surface elevation and α = 5 °C km⁻¹ is the lapse rate. Following Ritz et al. (2001) and Cuzzone et al. (2019), precipitation is computed as the sum of a temperature-dependent term P_t and an anomaly term.
The first term in this relation, P_t, accounts for changes in precipitation solely due to changes in temperature. Here λ_p = 0.07, which results in a 7% increase in precipitation for every 1 °C increase in temperature above modern (Abe-Ouchi et al., 2007; Ritz et al., 2001).
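As a concrete sketch of the two relations above, the lapse-rate temperature correction and the 7%-per-degree precipitation scaling can be written as follows. The linear form of the precipitation scaling and both function names are assumptions; the paper's exact equations are not reproduced in this excerpt.

```python
# Hedged sketch of the monthly temperature and precipitation adjustments.
# The linear precipitation scaling is an assumed form; only the constants
# (alpha = 5 deg C / km, lambda_p = 0.07) come from the text.
LAPSE_RATE = 5.0e-3   # alpha: 5 deg C per km, expressed per meter
LAMBDA_P = 0.07       # fractional precipitation change per deg C of warming

def surface_temperature(t_modern, delta_t, surface, surface_modern):
    """Monthly surface temperature with a lapse-rate elevation correction."""
    return t_modern + delta_t - LAPSE_RATE * (surface - surface_modern)

def precip_temperature_term(p_modern, delta_t):
    """Temperature-driven precipitation term P_t (assumed linear form)."""
    return p_modern * (1.0 + LAMBDA_P * delta_t)
```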
The term P_t does not capture the effects of changes in atmospheric circulation or other unknown climate factors that caused dynamic, regional changes in Holocene precipitation. Hence, we introduce a precipitation anomaly function ∆P, analogous to the temperature anomaly function ∆T. This time-dependent function, which has units of meters water equivalent (m.w.e.) a⁻¹, is used to adjust precipitation in order to reduce the mismatch between modeled and observed terminus positions.
Unlike ∆T, which is known from extensions of ice core data (Buizert et al., 2018), ∆P will be used as a control variable to be estimated. Positive degree days and snowfall are computed month-by-month based on mean monthly temperature and precipitation (Johannesson et al., 1995). Snow is melted first at a rate of 5 × 10⁻³ m.w.e. per degree day, followed by ice at a rate of 8 × 10⁻³ m.w.e. per degree day. Snow melt initially refreezes in the snowpack as superimposed ice. Runoff begins when the superimposed ice reaches a given fraction (60%) of the snow cover (Reeh, 1991).
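The monthly melt-and-refreezing accounting above can be sketched as a short routine. The degree-day factors and refreezing fraction come from the text; the function structure and names are illustrative, not taken from the paper's code.

```python
# Sketch of one month of the PDD melt/refreezing accounting described above.
DDF_SNOW = 5.0e-3    # m.w.e. melted per positive degree day (snow)
DDF_ICE = 8.0e-3     # m.w.e. melted per positive degree day (ice)
REFREEZE_FRAC = 0.6  # superimposed-ice fraction of snow cover (Reeh, 1991)

def monthly_runoff(pdd, snow):
    """Return (runoff, remaining snow) in m.w.e. for one month.

    pdd  : positive degree days accumulated during the month
    snow : snow cover at the start of the month, m.w.e.
    """
    # Snow melts first, at the lower degree-day factor.
    snow_melt = min(snow, DDF_SNOW * pdd)
    # Degree days not consumed by snow melt act on exposed ice.
    leftover_pdd = max(0.0, pdd - snow_melt / DDF_SNOW)
    ice_melt = DDF_ICE * leftover_pdd
    # Melted snow refreezes as superimposed ice until it reaches 60% of
    # the snow cover; only the excess (plus ice melt) runs off.
    refreeze = min(snow_melt, REFREEZE_FRAC * snow)
    runoff = (snow_melt - refreeze) + ice_melt
    return runoff, snow - snow_melt
```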

Model Limitations
A limitation of our modeling approach is that we do not account for potential ice dynamical effects caused by changes in meltwater runoff or subglacial drainage. Modeling meltwater runoff would be difficult in a flowline model due to flux in and out of the path of ice flow. Another limitation of our model is that it is isothermal. Unless ice temperature is treated in a vertically averaged sense, resolving temperature would require a 2D mesh, which would considerably increase the computational cost of the model. We consider the consequences of this simplification in Section 3.5 by testing sensitivity to the ice hardness parameter.

Data Assimilation Approach
In order to assess the initial mismatch between modeled and observed terminus positions, we perform a reference experiment using temperature anomaly reconstructions from Buizert et al. (2018) and Dahl-Jensen et al. (1998), and a precipitation anomaly ∆P = 0 (Section 3.1).

Table 2. Citations for the primary data sets used in this work.

Data                           Citation
Terminus position chronology   Young et al. (2019, In Review)
Bedrock elevation              Morlighem et al. (2017)
Modern ice surface velocity    Mouginot et al. (2017)
Modern precipitation           Box

To improve the fit from these initial runs, we assimilate terminus position observations to obtain improved estimates of the Holocene precipitation anomaly.
Previous modeling studies indicate that Holocene retreat in land-terminating sectors of the GrIS was dominated by surface mass balance rather than ice dynamics (Cuzzone et al., 2019; Lecavalier et al., 2014). Thus, uncertainty in Holocene climate, and hence surface mass balance, is likely the primary cause of discrepancies between modeled and observed terminus positions.
In principle, both temperature and precipitation anomalies could be tuned to improve the fit between modeled and observed terminus positions, but precipitation is more poorly constrained.
In the upcoming sections, we outline our approach for estimating precipitation anomalies given measured terminus positions.
We introduce a framework for time-dependent data assimilation based on the unscented transform (UT) (Julier and Uhlmann, 1997). Section 2.5.1 outlines the basic tenets of the unscented transform. Sections 2.5.2-2.5.6 outline how the unscented transform can be applied directly to the problem of time-dependent data assimilation.

Overview of the Unscented Transform
Suppose that $\mathbf{x}$ is an $n$-dimensional Gaussian random variable with mean vector $\mathbf{x}_0$ and covariance matrix $P_x$, and $F : \mathbb{R}^n \to \mathbb{R}^m$ is a nonlinear function. We would like to compute the mean $\bar{\mathbf{y}}$ and covariance matrix $P_y$ of the transformed random variable

$$\mathbf{y} = F(\mathbf{x}).$$
In general, the mean and covariance of the non-Gaussian probability distribution $P(\mathbf{y})$ can be approximated using Markov chain Monte Carlo (MCMC) methods such as the Metropolis-Hastings algorithm (Chib and Greenberg, 1995). As a computationally efficient alternative to MCMC methods, Julier and Uhlmann (1997) introduced a method for approximating the mean and covariance of $P(\mathbf{y})$ called the unscented transform (UT).
A set of vectors, called sigma points, are chosen with the same weighted sample mean and weighted covariance structure as $\mathbf{x}$. There are many algorithms for generating sigma point sets with different numbers of points. A commonly used set of $2n+1$ sigma points is given by

$$\mathcal{X}_0 = \mathbf{x}_0, \qquad \mathcal{X}_i = \mathbf{x}_0 + \left(\sqrt{(n+\kappa)P_x}\right)_i, \qquad \mathcal{X}_{i+n} = \mathbf{x}_0 - \left(\sqrt{(n+\kappa)P_x}\right)_i, \qquad i = 1, \ldots, n,$$

with corresponding weights given by

$$w_0 = \frac{\kappa}{n+\kappa}, \qquad w_i = w_{i+n} = \frac{1}{2(n+\kappa)}, \qquad i = 1, \ldots, n. \tag{7}$$

The notation $\left(\sqrt{P_x}\right)_i$ refers to the $i$-th row of a matrix square root (typically computed by Cholesky factorization) of $P_x$, and $\kappa$ is a free parameter controlling the scaling of the sigma points around the mean. Julier and Uhlmann (1997) recommend a default value of $\kappa = 3 - n$. However, $\kappa$ can be fine-tuned to reduce prediction errors for a given problem.
The nonlinear function $F$ is applied to each sigma point to yield a set of transformed points

$$\mathcal{Y}_i = F(\mathcal{X}_i), \qquad i = 0, \ldots, 2n.$$

The mean and covariance of $P(\mathbf{y})$ are then estimated as the weighted sums

$$\bar{\mathbf{y}} \approx \sum_{i=0}^{2n} w_i \mathcal{Y}_i, \qquad P_y \approx \sum_{i=0}^{2n} w_i \left(\mathcal{Y}_i - \bar{\mathbf{y}}\right)\left(\mathcal{Y}_i - \bar{\mathbf{y}}\right)^T.$$

This algorithm is illustrated in Figure 2. The unscented transform is known primarily in the context of the unscented Kalman filter. However, the UT can be applied more generally as an alternative to traditional MCMC methods. Sigma point and weight sets are designed to estimate the first two statistical moments (mean and covariance) of the non-Gaussian distribution $P(\mathbf{y})$ using a minimal number of function evaluations.
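The sigma-point construction and weighted-sum estimates above can be condensed into a short routine. This is a generic sketch of the Julier and Uhlmann (1997) transform, not the paper's implementation; the function name and interface are illustrative.

```python
import numpy as np

def unscented_transform(F, x0, Px, kappa=None):
    """Estimate the mean and covariance of y = F(x), x ~ N(x0, Px),
    using the 2n+1 sigma points of Julier and Uhlmann (1997)."""
    n = len(x0)
    if kappa is None:
        kappa = 3.0 - n  # recommended default
    L = np.linalg.cholesky((n + kappa) * Px)  # matrix square root
    sigma = ([x0] + [x0 + L[:, i] for i in range(n)]
                  + [x0 - L[:, i] for i in range(n)])
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    Y = np.array([F(s) for s in sigma])       # transformed points
    y_mean = np.sum(w[:, None] * Y, axis=0)   # weighted mean
    diff = Y - y_mean
    Py = (w[:, None] * diff).T @ diff         # weighted covariance
    return y_mean, Py
```

For a linear function the transform is exact, which provides a quick sanity check of an implementation.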

Data Assimilation using the Unscented Transform
Here, we outline how the unscented transform is applied to estimate precipitation anomalies. Time-dependent data assimilation using the UT involves running the ice sheet model with a set of different precipitation anomaly histories, each corresponding to a different sigma point. This is followed by a post-processing step, which incorporates the ice sheet terminus chronology data via a Kalman filter type correction of the prior mean vector and covariance matrix. Implementation of the unscented transform is straightforward and easily parallelizable since each model run is independent.
In what follows, the notation $\mathbf{x} \sim N(\mathbf{x}_0, P_x)$ means that the random variable $\mathbf{x}$ is Gaussian, or normally distributed, with mean $\mathbf{x}_0$ and covariance $P_x$. Let $\Delta\mathbf{p}$ be an $n$-dimensional discretization of the unknown precipitation anomaly history $\Delta P$ at regular time intervals, $\boldsymbol{\ell}$ be an $m$-dimensional vector of glacier length observations through time, and $F : \mathbb{R}^n \to \mathbb{R}^m$ be a nonlinear function that maps temperature and precipitation forcings to glacier length measurements via the ice dynamics and PDD models. Given a multivariate Gaussian prior $\Delta\mathbf{p} \sim N(\Delta\mathbf{p}_0, P_0)$ with mean $\Delta\mathbf{p}_0$ and covariance matrix $P_0$, we would like to estimate the mean and covariance of the posterior distribution of $\Delta\mathbf{p}$ given observations

$$\boldsymbol{\ell} = F(\Delta\mathbf{p}) + \mathbf{r},$$

where $\mathbf{r} \sim N(0, R)$ represents the measurement noise. The unscented transform is applied to estimate the joint distribution of $[\Delta\mathbf{p}, \boldsymbol{\ell}]$ as a multivariate Gaussian distribution. To reduce computational costs, we use a minimal set of $n+1$ sigma points $\mathcal{P}_i$ with corresponding weights $w_i$ generated using the method presented in Menegaz et al. (2011). Their method includes one free parameter $0 < w_0 < 1$, which can be tuned to reduce prediction errors. Parameter values related to the UT, including $w_0$, are displayed in Table 1.
Sigma points are propagated through the model to obtain transformed points $\mathcal{L}_i = F(\mathcal{P}_i)$. In this context, the sigma points $\mathcal{P}_i$ correspond to different time-dependent precipitation anomaly histories, while the transformed points $\mathcal{L}_i$ correspond to the resulting glacier length histories given those precipitation anomalies as input (Figure 3). The prior mean and covariance matrix used to generate the sigma points in Figure 3 are discussed in Sections 3.2 and 2.5.3 respectively. The sawtooth structure of the sigma points reflects the mathematical formulation of the Menegaz et al. (2011) sigma points, rather than some underlying assumption about the structure of the precipitation anomaly.
Using the UT, we obtain the following approximation to the joint distribution:

$$\begin{bmatrix} \Delta\mathbf{p} \\ \boldsymbol{\ell} \end{bmatrix} \sim N\left( \begin{bmatrix} \Delta\mathbf{p}_0 \\ \boldsymbol{\mu} \end{bmatrix}, \begin{bmatrix} P_0 & C \\ C^T & S \end{bmatrix} \right).$$

Matrices $S$ and $C$ are known as the measurement covariance and cross covariance respectively. Given a measurement $\boldsymbol{\ell}_0$, the joint distribution then easily yields a Gaussian approximation of the posterior distribution for $\Delta\mathbf{p} \,|\, \boldsymbol{\ell}_0$ (Sarkka, 2013). Letting

$$K = C S^{-1},$$

we have

$$\hat{\Delta\mathbf{p}} = \Delta\mathbf{p}_0 + K\left(\boldsymbol{\ell}_0 - \boldsymbol{\mu}\right), \qquad \hat{P} = P_0 - K S K^T,$$

where $\hat{\Delta\mathbf{p}}$ and $\hat{P}$ are approximations of the posterior mean and covariance respectively. Thus, $\hat{\Delta\mathbf{p}}$ is a discrete approximation of the optimal precipitation anomaly history. Readers familiar with the Kalman filter will recognize that $\hat{\Delta\mathbf{p}}$ and $\hat{P}$ are computed using a standard Kalman update step given measurement $\boldsymbol{\ell}_0$ and Kalman gain $K$ (Sarkka, 2013).
It is worth noting that, unlike in a standard filtering approach to data assimilation, all measurement data are incorporated simultaneously rather than time step by time step. For that reason, the Kalman update step corrects the entire time-dependent precipitation history at once. Moreover, unlike in Kalman smoothing, we approximate the full posterior distribution $P(\Delta\mathbf{p} \,|\, \boldsymbol{\ell}_0)$ rather than the marginal distributions $P(\Delta p_i \,|\, \boldsymbol{\ell}_0)$, where $\Delta\mathbf{p} = [\Delta p_1, \cdots, \Delta p_n]^T$. Note that here, the variables $\Delta p_k$ with $k \neq i$ are marginalized out of the distributions. The use of time-dependent sigma points distinguishes our approach from standard Kalman filtering or Kalman smoothing approaches.
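The Kalman-type correction described above, given the sigma points, their model outputs, and an observation, can be sketched as follows. All variable names are illustrative, not from the paper's code.

```python
import numpy as np

# Sketch of the UT-based Kalman update. Inputs: sigma points dp_sigma
# (rows are precipitation anomaly histories), their model outputs L_sigma
# (rows are glacier length histories), UT weights w, prior mean/covariance
# dp0/P0, observed lengths ell0, and measurement covariance R.
def ut_update(dp_sigma, L_sigma, w, dp0, P0, ell0, R):
    mu = np.sum(w[:, None] * L_sigma, axis=0)   # predicted measurement mean
    dL = L_sigma - mu
    dp_diff = dp_sigma - dp0
    S = (w[:, None] * dL).T @ dL + R            # measurement covariance
    C = (w[:, None] * dp_diff).T @ dL           # cross covariance
    K = C @ np.linalg.inv(S)                    # Kalman gain
    dp_post = dp0 + K @ (ell0 - mu)             # posterior mean
    P_post = P0 - K @ S @ K.T                   # posterior covariance
    return dp_post, P_post
```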

Gaussian Markov Random Field Prior for Regularization
We use a Gaussian Markov random field (GMRF) prior to control the temporal smoothness of ∆P (Bardsley, 2013). The inverse covariance, or precision matrix, $Q$ of the prior takes the form of a discrete smoothing operator scaled by a regularization parameter $\delta$. Larger $\delta$ values produce smoother optimized solutions, whereas smaller values allow for more variation from the prior mean.
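A standard way to construct such a precision matrix is from a discrete second-difference operator, sketched below. The exact form of $Q$ used in the paper may differ; the second-difference penalty and the jitter term here are assumptions.

```python
import numpy as np

# Assumed second-difference GMRF precision matrix; delta plays the role
# of the smoothing parameter described in the text.
def gmrf_precision(n, delta, jitter=1e-8):
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]     # discrete second difference
    # Small diagonal jitter makes Q invertible so P0 = Q^{-1} exists.
    return delta * (D.T @ D) + jitter * np.eye(n)

# Prior covariance used to generate sigma points.
P0 = np.linalg.inv(gmrf_precision(44, delta=10.0))
```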

Measurement Mean and Variance
Observations of terminus position are available roughly every 1000 years between 11.6 and 7.2 ka BP, with a gap from 7.2 ka BP to present. In contrast, model time steps are on the order of months. Due to these disparate time scales, we use the following procedure to estimate the measurement mean $\boldsymbol{\ell}_0$ and covariance matrix $R$ on a time scale more appropriate for the ice sheet model.
A set of candidate retreat histories is generated by drawing random samples from a multivariate Gaussian distribution with the same covariance structure as the GMRF prior outlined in Section 2.5.3. Vector samples from this distribution contain glacier lengths at a set of discrete time points, and the GMRF covariance matrix results in relatively smooth retreat histories. The candidate retreat histories are resampled in such a way that the mean formation age and variance for each moraine approximately match observations. The average length and variance of these curves are computed at a series of time slices to obtain a plausible measurement mean $\boldsymbol{\ell}_0$ and a diagonal measurement covariance matrix $R$ (Figure 4).
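The sampling step above can be sketched as follows. A squared-exponential covariance stands in for the GMRF covariance of Section 2.5.3, the mean retreat curve is hypothetical, and the moraine-age resampling step is omitted for brevity.

```python
import numpy as np

# Sketch of the measurement-construction step: draw smooth candidate
# retreat histories, then take the mean and variance at each time slice.
rng = np.random.default_rng(0)
t = np.linspace(11.6, 0.0, 44)                       # time, ka BP
# Illustrative smooth covariance (squared-exponential kernel).
cov = np.exp(-0.5 * (t[:, None] - t[None, :])**2 / 2.0**2)
mean_length = np.linspace(480.0, 455.0, 44)          # hypothetical lengths, km
samples = rng.multivariate_normal(mean_length, cov, size=500)
ell0 = samples.mean(axis=0)          # measurement mean at each time slice
R = np.diag(samples.var(axis=0))     # diagonal measurement covariance
```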

Iterative Optimization Procedure
The state vector $\Delta\mathbf{p}$ contains values of ∆P at roughly 250 year intervals from 11.6 ka BP to present. Temporally sparse precipitation anomaly values are linearly interpolated for input into the ice sheet model (Figure 3). Using a low temporal resolution discretization of ∆P reduces the total number of forward model runs required for the unscented transform.

Optimizations are conducted in multiple passes. In the first pass, the measurement covariance matrix $R$ is multiplied by a factor of 1/4 so that the measurements are initially weighted more heavily than the prior. This produces a reasonable fit to the data, even given a poor initial estimate of ∆P. The optimal ∆P from a given iteration is used as the prior mean in the next iteration. We use the same prior covariance matrix $P_0$ for regularization in each iteration. After two to three iterations, modeled and observed terminus positions match within measurement uncertainty (Section 3). In our experience, the results are not dependent on the choice of prior mean in the first iteration, but convergence can be improved by choosing a sensible initial guess as in Section 3.2.
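In outline, the multi-pass scheme reads as follows. Here `run_ut_inversion` is a placeholder for one complete UT pass (sigma-point generation, forward model runs, Kalman update) and is not a function from the paper's code.

```python
# Outline of the multi-pass optimization described above.
def iterative_inversion(run_ut_inversion, dp_mean, P0, ell0, R, n_iter=3):
    P_post = P0
    for it in range(n_iter):
        # First pass: shrink R so measurements outweigh the prior.
        R_it = R / 4.0 if it == 0 else R
        # Each pass uses the previous optimum as the new prior mean,
        # keeping the same prior covariance P0 for regularization.
        dp_mean, P_post = run_ut_inversion(dp_mean, P0, ell0, R_it)
    return dp_mean, P_post
```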

Approach to Sensitivity Testing
The approach to data assimilation described in the previous sections accounts for measurement uncertainty, but not model uncertainty. The data assimilation procedure can be modified to account for uncertainties in the ice flow and PDD model parameters. We define an augmented state vector

$$\mathbf{u} = \begin{bmatrix} \Delta\mathbf{p} \\ \boldsymbol{\theta} \end{bmatrix},$$

where $\boldsymbol{\theta}$ is a vector of scalar parameters including the natural logarithm of the rate factor for ice, the basal traction, a parameter controlling precipitation scaling with temperature, and the PDD melt rate parameters for ice and snow.

The prior distribution for the augmented state vector is given by

$$\mathbf{u} \sim N\left( \begin{bmatrix} \Delta\mathbf{p}_0 \\ \boldsymbol{\theta}_0 \end{bmatrix}, \begin{bmatrix} P_0 & 0 \\ 0 & \Theta \end{bmatrix} \right),$$

where $\boldsymbol{\theta}_0$ is the parameter mean vector and $\Theta$ is a diagonal matrix containing parameter variances.
The unscented transform is applied to the augmented function

$$\boldsymbol{\ell} = \tilde{F}(\mathbf{u}) + \mathbf{r}$$

to obtain estimates of the joint distribution for $[\Delta\mathbf{p}, \boldsymbol{\theta}, \boldsymbol{\ell}]$ and the conditional distribution for $[\Delta\mathbf{p}, \boldsymbol{\theta}] \,|\, \boldsymbol{\ell}_0$. Since parameters are included as state variables, sigma points reflect a variety of precipitation histories and parameter sets. A model run for a particular sigma point is initialized from an appropriate steady state using the parameter set for that point.
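The augmented prior can be assembled as a block-diagonal Gaussian, as sketched below. The five parameter entries and all numerical values are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assembling the augmented prior u ~ N(u0, Pu) with a block-diagonal
# covariance. Parameter means and variances are placeholders.
n = 44
dp0 = np.zeros(n)                   # prior mean precipitation anomaly
P0 = 0.25 * np.eye(n)               # sketch prior covariance for anomalies
theta0 = np.array([-36.0, 9.0, 0.07, 8e-3, 5e-3])   # parameter means (sketch)
Theta = np.diag([0.1, 0.1, 0.01, 1e-6, 1e-6])       # parameter variances
u0 = np.concatenate([dp0, theta0])  # augmented mean vector
Pu = np.block([[P0, np.zeros((n, 5))],
               [np.zeros((5, n)), Theta]])          # block-diagonal covariance
```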

Model Initialization
Model runs are initialized by tuning the precipitation anomaly to obtain a steady state at 12.6 ka BP, with a margin position 5 km beyond the 11.6 ka BP moraine. We invert for a precipitation anomaly time series that forces a retreat of 5 km over 1000 years to obtain an initial ice sheet configuration with the correct terminus position at 11.6 ka BP. This initialization procedure is intended to ease the ice sheet out of steady state in order to avoid strong transient effects at the beginning of model runs.

Reference Experiment
Here, we force the model with monthly temperatures from Buizert et al. (2018) and Dahl-Jensen et al. (1998) (Figure 6 a, b), and a zero precipitation anomaly. Ice retreats rapidly through the Kangerlussuaq region during the early Holocene. By 10 ka BP, the terminus has retreated inland of the present day margin on both the northern and southern flowlines (Figure 6 e).

Buizert Temperature Inversion
We estimate precipitation anomalies on the northern and southern flowlines using the Buizert et al. (2018) temperature reconstruction. Given the rapid retreat in the reference experiment, we expect that additional precipitation will be required in the early Holocene in order to match observed terminus positions. Therefore, in the first round of optimization, we assume a prior mean that is a simple decreasing function of a rescaled time variable τ, which increases from zero at 11.6 ka BP to one at 0 ka BP. The results of the iterative optimization procedure are insensitive to the prior mean selected for the first iteration. A wide region around the mean is explored in each iteration due to the spread of the sigma points.
Positive precipitation anomalies are predicted throughout the Holocene (Figure 6 c, d). Assuming the Holocene thermal maximum extends from 10 to 6 ka BP as in Buizert et al. (2018), average HTM snowfall is 42% greater than modern on the northern flowline and 26% greater on the southern flowline (Figure 7 a). However, average snowfall during the HTM is lower than the overall Holocene average due to high precipitation from 11.6 to 10 ka BP (Figure 6 b). In the Dahl-Jensen et al. (1998) inversion, snowfall peaks during the HTM at 43% and 35% above modern on the northern and southern flowlines respectively. Average snowfall during the HTM is 25% greater than modern, whereas overall average Holocene snowfall is roughly equivalent to modern (Figure 7 b). Consequently, there is an evident relationship between HTM warming and increased snowfall.

Uncertainty Estimates during the HTM
The unscented transform is a third order accurate method for estimating the mean of a transformed Gaussian random variable, but only first order accurate for estimating covariance (Sarkka, 2013). Hence, while the unscented transform tends to be a good approximation for many nonlinear functions, it is not a full-fledged substitute for MCMC methods, which can compute expectation integrals to higher levels of accuracy. Due to this limitation, ensemble filters and the unscented Kalman filter may underestimate covariance for highly nonlinear problems (Luo and Hoteit, 2013). Practitioners will sometimes artificially inflate covariance estimates obtained from unscented or ensemble Kalman filters in order to prevent filter divergence and increase accuracy (Anderson and Anderson, 2002).

In an effort to obtain conservative uncertainty estimates, we compare the unscented transform to a higher order cubature method that is second order accurate for estimating the covariance (Appendix A). Since the fifth order method is more computationally expensive, we restrict this comparison to the HTM. Mean precipitation anomaly estimates are nearly identical between the UT and fifth-order cubature methods. However, the UT appears to underestimate uncertainty during the latter half of the HTM (Figure 8). It is possible that other sigma point sets might capture the covariance structure better than the minimal sigma point set from Menegaz et al. (2011). Nonetheless, the minimal sigma point set performs reasonably well considering its low computational cost.

Sensitivity Testing
We assess the sensitivity of Holocene precipitation anomalies to parameter uncertainties by performing an HTM inversion using the methodology described in Section 2.5.6. To obtain accurate uncertainty estimates, we use the 5th order cubature method (Appendix A); relevant parameter values are given in Table 1.
The estimated precipitation anomaly in this experiment is within 7 cm water equivalent per year of that obtained in Section 3.4 (Figure 9). Parameter uncertainties contribute to higher uncertainty in the estimated precipitation anomaly, particularly at the beginning of the HTM. Even considering this uncertainty, the general pattern of precipitation changes is consistent with previous inversions that assumed fixed parameter values. Average HTM snowfall is around 30% greater than modern. Due to high snowfall in the early Holocene (nearly 100% higher than modern) and a dip in HTM precipitation associated with the 8.2 ka BP cold event, the HTM average nearly matches the overall Holocene average. In contrast, inversions using the Dahl-Jensen temperature reconstruction show a clear trend between HTM warming and increased precipitation, and HTM snowfall is around 25% higher than the Holocene average. Estimated precipitation anomalies during the HTM are not particularly sensitive to uncertainties in the ice sheet or PDD model parameters, or to the initialization procedure (Figure 9). In the sensitivity test in Section 2.5.6, model runs are initialized using one of a set of several steady states with different parameter combinations. In Section 3.4, which compares the UT to a fifth order cubature method, model runs are initialized from a single transient ice sheet state using output from a previous inversion. Despite these differences in model initialization, and the effects of parameter uncertainties, estimated HTM precipitation anomalies are comparable in both inversions.
Unexpectedly, using the Dahl-Jensen et al. (1998) reconstruction results in a non-zero precipitation anomaly at 0 ka BP.

The Unscented Transform as a Data Assimilation Method
Significant strides have been made in time-dependent data assimilation in glaciology using adjoint based methods. Goldberg and Heimbach (2013) infer the initial thickness and basal conditions for a synthetic ice sheet given snapshots of ice thickness at discrete times. Larour et al. (2014) demonstrate a data assimilation framework within the Ice Sheet System Model (ISSM), capable of obtaining temporal estimates of surface mass balance and basal friction given surface altimetry.
In contrast to these adjoint based approaches, the unscented transform does not require computing the Jacobian or Hessian of an objective function, or special checkpointing code for time-dependent problems. Adjoint based methods are advantageous for extremely high-dimensional problems, as the required number of model runs is independent of the number of parameters.
However, the UT provides more accurate uncertainty estimates than linearization (Julier and Uhlmann, 1997). Hessian information can be used to improve uncertainty estimates (Isaac et al., 2015), but to our knowledge, this approach has not been applied to time-dependent problems in glaciology.
The unscented transform has a number of significant advantages over Markov chain Monte Carlo methods for inference problems with tens to hundreds of parameters. In this work, we optimize for n = 44 parameters representing precipitation anomaly values at discrete points in time. Since function evaluations at each sigma point are independent, the UT is trivially parallelizable. Consequently, on a high end desktop, the iterative optimization process requires computation time equivalent to three successive forward model runs.
MCMC methods can be parallelized to some extent by utilizing multiple interacting Markov chains running in parallel. Exploring surrogate modeling approaches to solving inference problems in glaciology is a promising avenue for future research. The potential accuracy of the unscented transform is limited since it uses a predetermined number of points. In our case, using a minimal set of n + 1 sigma points for n unknown parameters yields accurate mean estimates but appears to underestimate covariance when compared to a higher order cubature method (Figure 7). MCMC methods can provide more accurate estimates given sufficient computation time. In theory, MCMC methods are more viable than the UT for extremely high dimensional problems since their convergence rate is independent of dimension. In practice, however, the number of function evaluations needed for MCMC methods makes them intractable for some problems.

Modeling Conclusions
Our work follows a number of previous observationally constrained paleo ice sheet modeling studies (e.g. Tarasov and Peltier, 2002; Lecavalier et al., 2014; Calov et al., 2015). Perhaps most relevant to this work is Lecavalier et al. (2014), who model the deglaciation of Greenland from the Last Glacial Maximum using a 3-D thermomechanically coupled ice sheet model. Model runs in Lecavalier et al. (2014) are informed by constraints on relative sea level, ice core thinning, and LGM ice sheet extent.
In contrast to previous modeling studies, the computational efficiency of the flowline model outlined in Brinkerhoff et al. (2017) makes time-dependent data assimilation, sensitivity testing, and robust uncertainty estimation tractable. Sensitivity tests indicate that precipitation anomaly inversions are consistent when accounting for parameter uncertainties in the ice-dynamics model. This result supports earlier findings showing that modeled Holocene retreat in land terminating regions is more sensitive to surface mass balance than other factors like the flow law or basal sliding (Cuzzone et al., 2019; Lecavalier et al., 2014).
A drawback of our modeling approach is that we cannot account for inherently map-plane effects such as changes in ice flow direction, or convergent and divergent flow in or out of the assumed flow path. These factors likely contribute to discrepancies in estimated precipitation anomalies between the northern and southern flowlines (Figure 6).
Reconstructed precipitation anomaly histories, along with their uncertainties, likely bracket the range of physically plausible values. However, this is difficult to assess without a more rigorous accounting of temperature uncertainty. In this work, we do not treat temperature as a random variable with its own covariance structure. Arctic warming and changing sea ice extent are expected to cause similar changes in precipitation that will influence future GrIS mass balance. According to Bintanja and Selten (2014), declining sea ice may cause an increase in accumulation over areas of Arctic land ice. Similarly, Singarayer et al. (2006) link increased Arctic warming and sea-ice decline to increased precipitation, projecting precipitation will increase by more than 50% by the end of the century.
Code availability. The flowline ice sheet model and PDD model used in this work are available at https://github.com/JacobDowns/flow_ model.

Appendix A: A Higher Order Method for Estimating Covariance
Estimating the mean and covariance of the posterior distribution requires approximating Gaussian weighted expectation integrals of the form

$$E[F(\mathbf{x})] = \int_{\mathbb{R}^n} F(\mathbf{x}) \, N(\mathbf{x} \,|\, \mathbf{x}_0, P_x) \, d\mathbf{x}$$

via numerical integration rules, also called cubature rules, which approximate the expectation integral as a weighted sum

$$E[F(\mathbf{x})] \approx Q[F(\mathbf{x})] = \sum_i w_i F(\mathbf{x}_i).$$

General Gaussian weight functions $N(\mathbf{x}_0, P_x)$ are handled by changing variables. Letting $\sqrt{P_x}$ be a matrix square root of the covariance matrix, we have

$$E[F(\mathbf{x})] = \int_{\mathbb{R}^n} F\left(\mathbf{x}_0 + \sqrt{P_x}\, \mathbf{z}\right) N(\mathbf{z} \,|\, \mathbf{0}, I) \, d\mathbf{z}.$$

Cubature rules, including the unscented transform, are constructed to exactly integrate polynomial functions $F(\mathbf{x})$ up to a certain degree $d$. Suppose that $\mathbf{x} = [x_1, x_2, \cdots, x_n]^T$ is a point in $\mathbb{R}^n$. A monomial of degree $d$ refers to a function $x_1^{i_1} x_2^{i_2} \cdots x_n^{i_n}$ where the exponents are non-negative integers that sum to $d$. A polynomial of degree $d$ is a linear combination of monomials with highest degree $d$.

The notation $\sum_{\text{full sym}} F(\cdots)$ refers to a sum of the function $F$ evaluated at all points in the fully symmetric set generated by the given point.
Due to the symmetry of the sigma points and the Gaussian weight function, all moments (that is, integrals of Gaussian weighted monomial functions) containing an odd order exponent are automatically satisfied. Exploiting this fact, and the symmetries of the sigma points, it can be shown that satisfying the remaining moment constraint equations up to the fifth order reduces to solving a system of four equations in four unknowns $w_1$, $w_2$, $w_3$, and $\lambda$. By slightly modifying this cubature rule to

$$Q[F(\mathbf{x})] = w_1 F(0, 0, \cdots, 0) + w_2 \sum_{\text{full sym}} F(\lambda_1, 0, \cdots, 0) + w_3 \sum_{\text{full sym}} F(\lambda_2, \lambda_2, 0, \cdots, 0),$$

we introduce a new free parameter $\lambda_2$ that allows scaling of the sigma points about the mean. The modified moment constraint equations can be solved for $n > 4$ provided $n - \lambda_2^2 - 1 \neq 0$. A drawback of the original cubature rule, as well as the modification of it presented here, is that it requires negative weights, which can lead to numerical instability.
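The moment-matching idea above can be checked numerically. For a standard Gaussian, the 2n+1 sigma-point rule with kappa = 3 - n reproduces E[x1^2] = 1 and E[x1^4] = 3 exactly, but gives 0 for the cross moment E[x1^2 x2^2] (true value 1); this is the kind of deficiency that motivates the fifth-order rule. The sketch below is illustrative, not the paper's code.

```python
import numpy as np

def ut_expectation(F, n, kappa):
    """Approximate E[F(z)], z ~ N(0, I_n), with the 2n+1 sigma-point rule."""
    scale = (n + kappa) ** 0.5
    pts = [np.zeros(n)]
    w = [kappa / (n + kappa)]
    for i in range(n):
        e = np.zeros(n)
        e[i] = scale
        pts += [e, -e]                       # symmetric pair on axis i
        w += [1.0 / (2 * (n + kappa))] * 2
    return sum(wi * F(p) for wi, p in zip(w, pts))

n, kappa = 2, 1.0                            # kappa = 3 - n
m2 = ut_expectation(lambda x: x[0]**2, n, kappa)             # exact: 1
m4 = ut_expectation(lambda x: x[0]**4, n, kappa)             # exact: 3
mc = ut_expectation(lambda x: x[0]**2 * x[1]**2, n, kappa)   # rule gives 0 (true: 1)
```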
Author contributions. JD was the lead author of this manuscript. JJ and JC assisted JD with ice sheet modeling efforts. JB, NY, and AL provided the moraine age data used in this work. JD and JJ wrote the manuscript with input from all authors.
Competing interests. The authors declare that they have no conflict of interest.