
An inverse problem in glacial geology: the reconstruction of glacier thinning in Glacier Bay, Alaska, between A.D. 1910 and 1960 from relative sea-level data

Published online by Cambridge University Press:  30 January 2017

James A. Clark*
Affiliation:
Cooperative Institute for Research in Environmental Sciences, University of Colorado/NOAA, and Institute of Arctic and Alpine Research, University of Colorado, Boulder, Colorado 80309, U.S.A.

Abstract

Inverse theory is applied to relative sea-level data to reconstruct a glacial history that is consistent with the emergence data near Glacier Bay, Alaska, over the past half century. A comparison between the predicted glacial thinning and observed thinning indicates that a combination of elastic uplift of the ground and a fall in the geoid defining the ocean surface causes 25% to 35% of the observed emergence of the coast line. Because of errors in the sea-level data, inverse theory cannot provide a unique solution in terms of glacier response, and, in fact, the least-squares fit is very inaccurate. However, by considering (1) the error of fit to the data, (2) the accuracy of the model, (3) the variation in the model, and (4) the model resolution, a physically realistic glacial history can be deduced. The advantage of inverse calculations lies not only in their efficiency in finding a model that fits the data but, more importantly, in providing a means of assessing the reliability of the model by indicating its accuracy and the range of other glacial histories that also fit the data. This approach also allows an estimate of the extent to which each data point constrains the model, independent of the actual data values. From this information, the most useful areas for additional data collection may be delimited.

Résumé

Inverse theory is applied to relative sea-level data to reconstruct a glacial history that is consistent with the data on continental emergence near Glacier Bay, Alaska, over the past half century. A comparison between the predicted and the observed glacier thinning indicates that a combination of elastic uplift of the ground and a fall of the geoid defining the ocean surface causes 25% to 35% of the emergence observed along the coast. Because of errors in the sea-level data, inverse theory cannot give a unique solution for the glacier response and, in fact, the least-squares fit is very inaccurate. However, by considering (1) the error of fit to the data, (2) the accuracy of the model, (3) the variation in the model, and (4) the resolution of the model, a physically realistic deglaciation history can be reconstructed. The advantage of inverse calculations lies not only in their efficiency in finding a model that accounts for the data but, more importantly, in providing a measure of the reliability of the model by indicating its accuracy and the range of other deglaciation chronologies that also explain the data. This approach also allows an estimate of the extent to which each data point constrains the model, independently of the actual value of that datum. From this information, the most useful areas for collecting new data can be delimited.

Zusammenfassung

To reconstruct a glacial history consistent with the coastal uplift near Glacier Bay, Alaska, during the last 50 years, inverse theory is applied to relative sea-level data. A comparison of the calculated glacier thinning with that observed shows that a combination of elastic uplift of the ground and a lowering of the geoid, defined as the ocean surface, produces 25-35% of the observed coastal emergence. Because of errors in the sea-level data, inverse theory cannot provide a unique solution in terms of glacier behaviour, and in fact the least-squares result is very inaccurate. However, by considering (1) the error of the fit to the data, (2) the accuracy of the model, (3) the variation in the model, and (4) the resolution of the model, a physically reasonable glacial history can be derived. The advantage of inverse calculations lies not only in their suitability for finding a model that fits the data, but even more in the possibility of assessing the reliability of the model by indicating its accuracy and the range of other chronologies that also fit the data. This procedure also allows an estimate of the extent to which each observation point, independent of its actual observed value, constrains the model. This information can be used to determine those areas in which the collection of further data would be most effective.

Research Article

Copyright © International Glaciological Society 1977

Introduction

During the past century relative sea-level has fallen rapidly near Glacier Bay, Alaska. Although this rapid emergence of the coast line, exceeding 3 cm/year, is occurring in a region of tectonic activity, continuous tide-gauge records indicate that the emergence is not related to earthquakes (Hicks and Shofnos, 1965). Since the numerous glaciers near Glacier Bay have retreated very rapidly during this same time period, Hicks and Shofnos (1965) suggested glacial control. The rate of emergence is comparable to that observed in Hudson Bay, which is known to be a consequence of post-glacial isostatic uplift following the retreat of the Laurentide ice sheet, so they argued for a viscous relaxation mechanism beneath Glacier Bay also. Viscous flow of the mantle undoubtedly contributes to the rapid emergence, but we demonstrate here that instantaneous elastic uplift of the ocean floor and changes in the geoid, defining the ocean surface, are also important. The elastic properties of the Earth and the classical effects of gravity are understood much better than the Earth's viscous properties, so these instantaneous causes should be considered before time-dependent viscous causes.

Here we assume that the entire sea-level response results from these immediate effects, and we determine the amount and distribution of glacial thinning required to cause the observed sea-level change. Given a glacial history, we can calculate the resulting sea-level change everywhere (Farrell and Clark, 1976), but this calculation, termed the “forward calculation”, is not our goal. Rather, we want to determine the glacial history from the observed sea-level changes.

A general approach to inverse problems was first proposed by Backus and Gilbert (1967, 1968, 1970) and their method has subsequently been used to study a variety of geophysical problems (e.g. Parker, 1970; Engdahl and Johnson, 1974; Minster and others, 1974; Aki and others, 1977; Peltier, 1976). Our development closely follows an extension of this procedure discussed thoroughly by Jackson (1972), Wiggins (1972), and Parker (1977). To solve the inverse problem requires finding a glacial history that fits the relative sea-level data sufficiently well. If the physics of the problem is adequately described, with realistic assumptions, then it is reasonable to suppose that a glacial history exists for the model. We might discover the correct history by trial-and-error, that is, by subjectively varying the glacial history until the forward calculation predicts sea-level changes that are sufficiently close to the measured data. Or we might select glacial histories at random until we accidentally discover one that is suitable. Here we apply a more efficient and systematic approach that uses the observed emergence to construct a glacial history that is consistent with the data, but the success of this construction does not imply that the inverse problem is solved. Because of errors in the emergence data, the glacial history that best fits the data is not unique and this non-uniqueness is the main problem addressed by inverse theory. Other glacial histories might also predict emergences that lie within the errors of the data. Inverse theory provides an estimate of the degree of departure from uniqueness, and hence indicates how much we really know about the actual glacial history. Because of these uncertainties, we can only consider models of the glacial history. Also, errors in the data will propagate to the ice model, and the inverse theory shows how to calculate these errors in the model. Finally, all data are not equally important in constraining the ice model. The use of inverse theory allows us to determine the relative importance of each datum, giving insight into the model and providing a means of determining the best localities for data collection when designing a field project.

Emergence data

Hicks and Shofnos (1965) reported the average rate of emergence at 31 localities surrounding Glacier Bay. Five of these emergence rates were obtained from continuous tide-gauge records, and the remaining ones were determined by re-occupying, in 1959 and 1960, benchmarks that were established between 1887 and 1940. All but three of these data are used here.

We discard emergence data at Yakutat because the continuous sea-level record indicates an abrupt rise of sea-level at the time of the 1958 Alaska earthquake (Richter magnitude 8). The benchmark in Bartlett Cove, near the Glacier Bay National Park Service headquarters, is on Quaternary alluvium and does not have a continuous record. It may have been disturbed during the 1958 earthquake and is eliminated from the data set. Ketchikan is far from Glacier Bay and we discard its emergence data because it would constrain the ice model very little.

The 28 emergence rate values were multiplied by 50 years to get the net change in sea-level between the years 1910 and 1960. For some data this means extending the rate beyond the range of observation, but for others it represents a time interval less than the full record available. The positions of these 28 data points are illustrated in Figure 1.

Fig. 1. Positions of the 28 data points used in this study. The numbering differs from that of Hicks and Shofnos (1965).

Errors in the data arise from (1) meteorological and oceanographic effects, (2) differences in length of record (19-73 years), and (3) time-dependent rates of emergence. For those stations that have a continuous record, Hicks and Shofnos report standard errors for the rate of emergence. These standard errors, when multiplied by 50 years, give the standard deviations of the corresponding data. For the majority of the data, which are discontinuous and represent just two observations, the above method is inapplicable. To partially eliminate the meteorological and oceanographic effects, Hicks and Shofnos adjusted the discontinuous record by the amount that the comparable continuous record at Ketchikan differed from the mean value at Ketchikan. This is a standard procedure for comparing sea-level observations of different lengths and at different times at a single station (Hicks and Shofnos, 1965). Changes in the rate of emergence cannot be detected unless the series is continuous. The errors for the discontinuous records must be inferred from the standard deviations of the continuous records. Figure 2 shows an attempt to describe the standard deviation as a function of total emergence with a linear regression equation, and we hope that this equation will provide a useful estimate of the standard deviation of those data that do not have a continuous record. The fit

(1)

where y is the standard deviation in meters of the emergence datum and x is the total emergence in meters in 50 years, gives a correlation coefficient of 0.87 (significance level = 90%) with a standard error of the estimate of 1.9 cm. We assume Equation (1) also models the errors present in the discontinuous record. Because no attempt was made to eliminate the meteorological and oceanographic effects from the continuous records, as was done for the discontinuous records, the resulting error estimates determined from the continuous records are probably greater than the actual errors. Table I lists the data values and the calculated errors.
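As a rough numerical sketch of how an error model such as Equation (1) can be obtained, the fragment below fits a straight line to hypothetical (emergence, standard deviation) pairs for the continuous-record stations and then applies it to a discontinuous record; the numbers are placeholders, not the values of Hicks and Shofnos (1965).

import numpy as np

# Hypothetical (total emergence, standard deviation) pairs in metres for
# stations with continuous tide-gauge records; placeholders only.
emergence = np.array([0.4, 0.8, 1.2, 1.6])
std_dev = np.array([0.02, 0.03, 0.05, 0.06])

slope, intercept = np.polyfit(emergence, std_dev, 1)   # least-squares line
r = np.corrcoef(emergence, std_dev)[0, 1]              # correlation coefficient

def sigma_estimate(total_emergence_m):
    # Estimated standard deviation (m) for a station with only two observations.
    return slope * total_emergence_m + intercept

print(f"sigma = {slope:.3f} * emergence + {intercept:.3f}, r = {r:.2f}")
print("estimated sigma for 1.1 m of emergence:", sigma_estimate(1.1))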

Fig. 2. The relationship between emergence and error. The most rapidly rising data positions also have the greatest error in measurement.

Relative sea-level change on an elastic earth—the solution to the forward problem

The formulation for the inverse calculation requires that we know how a small change in the ice load model effects a corresponding change in relative sea-level. This problem is treated in detail by Farrell and Clark (1976) and is briefly reviewed below.

When a mass is placed on the Earth, it changes the gravitational equipotential surface (the geoid) and it deforms the solid surface of the Earth. The observed change in relative sea-level is the change in the distance between the ocean surface, defined by the geoid, and the ocean floor. Farrell (1973) has calculated the change in the potential at the undeformed Earth's surface anywhere on a spherically symmetric, elastic, self-gravitating Earth model due to a surficial point load. The potential change results both from the mass itself perturbing the potential field and from the density changes within the Earth caused by the distortion of the loaded Earth. Farrell has also calculated the radial displacement of the Earth's surface so that the change in the potential caused by the change in elevation can be determined. Adding all of these effects gives the potential perturbation anywhere on the deformed surface of the Earth caused by a point load. This potential perturbation Green function G′ is shown in Figure 3 as a function of angular distance.

Table I. Relative sea-level data from the vicinity of Glacier Bay, Alaska. The total emergence in 50 years was calculated from emergence rates given by Hicks and Shofnos (1965). Equation (1) determined the standard deviations

Fig. 3. The normalized potential Green function used in this study. The response of an elastic Earth to a one kilogram point load is shown as a function of distance from the load. We used the Green function represented by the solid line indicating the Earth's response to a load on the ocean floor. The dashed line is the Green function for a Gutenberg-Bullen elastic Earth. If the material under the Glacier Bay glaciers is not oceanic in nature, then the Earth's elastic response to loading is even greater than we have assumed. Our results, therefore, should be conservative. Both Green functions have been normalized by the factor aθ × 10¹², where a is the radius of the Earth in meters (6.371 × 10⁶ m) and θ is the angular distance from the point load in radians. Because of the normalization by θ, the Green function varies by over five orders of magnitude in this figure.

To determine the potential perturbation anywhere on the Earth from a distributed load, consider the load to be a collection of point loads and add the effect of each. If L(r′) is the point load on the surface of the Earth at the position represented by the vector r′, so that the function L represents a surface density of the load, then the potential perturbation ψ at r is

ψ(r) = ∫ G′(rr′) L(r′) dΩ,

where the notation (rr′) represents the angle between position vectors r and r′; dΩ is an element of area; and the integral is evaluated over the entire areal extent of the load.

Farrell and Clark (1976) show that, for sea-level to remain on an equipotential surface, the radial increase in sea-level s′ is equal to ψ/g, with g the acceleration due to gravity. The change in sea-level shape s′ is not constant everywhere, and to ensure that mass is conserved, this expression must be corrected by a constant amount Ke.

For the case where ice melts and the melt water flows into the ocean, the load L is composed of an ice load LI and a water load Lw. We can consider this load as

L = ρI I + ρw s′,

where ρI and ρw are the densities of ice and water respectively, and I and s′ are the changes in the thickness of the ice and of the ocean.

Therefore, when the constant ocean-wide average rise (eustatic rise) KE is included to account for melt water flowing into the ocean, the sea-level change is the solution of an integral equation:

s′(r) = (ρw/g) ∫oceans G′(rr′) s′(r′) dΩ + (ρI/g) ∫ice G′(rr′) I(r′) dΩ + KE + Ke.    (2)

This expression for sea-level change can be greatly simplified because in this study we consider only sea-level changes close to the melting glaciers. Ketchikan, which is almost 450 km from Glacier Bay, has experienced very little change in relative sea-level, and the Green function (Fig. 3) also suggests that the effect of a point load upon sea-level at such a great distance is approximately two orders of magnitude less than the change expected only 10 km from the load. To be conservative, we assume that sea-level is affected by the water load that is as distant as 1 000 km from the data positions. Even then the water load is two orders of magnitude less than the ice load in Equation (2), and so we are justified in ignoring the first term accounting for the water load.

The third term, KE, which corrects Equation (2) for melt water increasing the volume of the oceans, is less than 1.5 cm for the ice melting considered here. Our experience, from numerous forward calculations, is that the last term, Ke, is usually much less than KE, so the constant correction terms combined are less than about 3 cm. Table I indicates that this 3 cm correction is, in general, less than 10% of the observed values and is less than the standard deviation of the measurement. Because the values of these correction terms depend upon the ice model, it is convenient to ignore these terms also at this time. This assumption introduces a small systematic error which is considered later.

When we invoke the above assumptions, which are valid only for regions close to the melting glaciers, the predicted emergence caused by a change in glacier thickness is

s′(r) = (ρI/g) ∫ice G′(rr′) I(r′) dΩ.    (3)

To find I(r'), one method is to make a guess of I(r') and to determine how closely s’ fits the observed emergence, d’. If the fit is poor, then I(r') is changed so that the fit (hopefully) improves. Such methods are inefficient so we now show a systematic way to determine ice thickness I(r') from sea-level observations, d'(r).
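As a minimal illustration of the forward calculation that the systematic scheme repeatedly exploits, the sketch below predicts emergence at a few data positions from a trial set of disc loads; the Green function is a crude placeholder, not Farrell's tabulated G′, and all numbers are invented.

import numpy as np

g = 9.81  # acceleration due to gravity, m/s^2

def greens_per_kg(theta_rad):
    # Placeholder Green function (potential perturbation per kilogram of load);
    # it merely decays with angular distance and is NOT Farrell's tabulated G'.
    return 1.0e-12 / np.maximum(theta_rad, 1.0e-4)

def forward(theta, melted_mass_kg):
    # theta: (n_data, n_loads) angular distances between data points and load centres.
    # melted_mass_kg: (n_loads,) mass removed from each disc (negative = accretion).
    # Returns the predicted emergence at each data point, in metres.
    influence = greens_per_kg(theta) / g     # the influence matrix
    return influence @ melted_mass_kg        # predicted sea-level change

# Tiny example: three data points, two disc loads (all numbers invented).
theta = np.array([[0.001, 0.004],
                  [0.002, 0.002],
                  [0.005, 0.001]])
print(forward(theta, np.array([5.0e14, 2.0e14])))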

The formulation of the inverse problem

A consequence of the Green-function method and the above assumptions is that the forward problem is linear, because G’ does not depend upon I. Hence the inverse problem is linear also. Most geophysical inverse problems are non-linear, requiring an initial guess of a model that is close to the correct model, but the exact linearity of our problem indicates that we need no a priori model.

For a numerical solution of Equation (3), it is convenient to rewrite this equation in matrix notation. If E is the total loaded area, it is assumed there exist m disjoint regions of E, Ei (i = 1, …, m), centered at ri respectively, such that I takes a distinct constant value I(ri) on Ei. The sea-level change at position rj caused by the uniform load on element Ei is

s′(rj) = (ρI/g) I(ri) ∫Ei G′(rjr′) dΩ.    (4)

This expression is rewritten as

s′(rj) = A′ji mi,

where mi is the mass of ice that melts (negative for accretion) over area Ei, and

A′ji = (1/gAi) ∫Ei G′(rjr′) dΩ,

with Ai the area of Ei, is the effect of a one kilogram mass, uniformly distributed over Ei, upon sea-level at position rj. The total sea-level change at rj is

s′(rj) = Σi A′ji mi.
In matrix notation, Equation (3) now has the approximate form

s′ = A′m,

where s′ is a column vector with n elements, m is a column vector with m elements, and A′ is a matrix with n rows and m columns (n × m). If the data form a vector d′, then one of our goals is to find an ice model m such that

A′m ≈ d′.    (5)

If m = n and A’ is non-singular, there is a unique model m, but errors in the data may cause this model to be very dissimilar from the real glacier change. If n > m the problem is over-determined, with more equations (data) than unknowns, and a least-squares solution exists. When m > n, the problem is under-determined with fewer equations than unknowns, but we can still infer something about the unknowns from the limited data.

The positions and sizes of the 15 ice loads used in this study are shown in Figure 4. All loaded areas are here assumed equal but this is not a necessary assumption. They include existing glaciers and, in particular, regions where ice changes have occurred within the past century (Bohn, 1967, p. 107). The disc shape is used for convenience, but more irregular shapes may also be employed for greater realism. The number of data points (28) exceeds the number of ice model points (15), making this an over-determined problem.

Each datum, d′i, has a different standard deviation and should be transformed so that the variance of every new datum is unity (Wiggins, 1972). This will ensure that the model depends more on accurate data than on inaccurate data and will simplify the interpretation of the model at a later stage. Let W be a diagonal matrix (n × n) where an element of the diagonal, Wii, is the standard deviation of the datum, d′i, calculated from the linear regression Equation (1). To standardize the data, multiply Equation (5) by the inverse of W, and let A = W⁻¹A′ and d = W⁻¹d′, so that in the standardized coordinate system Equation (5) becomes

Am ≈ d,

and every datum has unit variance.
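In code, this standardization is a single diagonal scaling; the sketch below assumes the influence matrix and data vector have already been assembled and uses invented numbers.

import numpy as np

def standardize(A_prime, d_prime, sigma):
    # Divide each equation by the standard deviation of its datum so that
    # every standardized datum has unit variance: A = W^-1 A', d = W^-1 d'.
    W_inv = np.diag(1.0 / sigma)
    return W_inv @ A_prime, W_inv @ d_prime

A_prime = np.array([[2.0, 0.5],
                    [1.0, 1.5],
                    [0.2, 0.3]])            # invented influence matrix (n = 3, m = 2)
d_prime = np.array([1.20, 0.90, 0.10])      # invented emergence data, metres
sigma = np.array([0.04, 0.03, 0.02])        # standard deviations from Equation (1)
A, d = standardize(A_prime, d_prime, sigma)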

We want to know the nature of m, but before proceeding a brief review of eigenanalysis is worthwhile and a definition of terms is necessary.

Fig. 4. Positions and sizes of model glacier loads relative to present glacier extent. We assume that each loaded area (radius = 10 km) is uniformly loaded. Field (1947) claims that the area covered by glaciers near Muir Inlet was reduced by about 35% during the approximate time span considered here. The model locations, therefore, reflect the belief that the glaciated area was much more extensive at the turn of the century than at present.

The decomposition theorem and the natural inverse

Consider the “shifted eigenvalue problem” of Lanczos (1961, p. 117) for any real (n × m) matrix A:

Av = λu,     Aᵗu = λv,

where the superscript t indicates matrix transpose. This system of equations can be solved for v, u, and λ because substitution yields the uncoupled eigenvalue problems

AᵗAv = λ²v

and

AAᵗu = λ²u,

which can be solved using well-known methods. For any matrix A there will exist p non-zero eigenvalues λi, where p is less than or equal to the minimum of n and m. Associated with each non-zero eigenvalue λi there exists a “model eigenvector” vi and a “data eigenvector” ui. The matrix Λ is defined as the (p × p) diagonal matrix with Λii = λi and ordered such that Λii ≥ Λjj (i ≤ j). Define V as an (m × p) matrix whose jth column is the model eigenvector vj with eigenvalue Λjj. The (n × p) matrix whose jth column is the data eigenvector uj is also associated with Λjj, and is defined as U.

It can be shown that UᵗU = I and VᵗV = I, where I is the identity matrix. Both UUᵗ and VVᵗ, however, equal the identity matrix only when n = m = p.

The “natural inverse” B of the matrix A is defined by Lanczos (1961, p. 124) as

B = VΛ⁻¹Uᵗ,

and has the property that

AB = UUᵗ

and

BA = VVᵗ.

For the case where m = n = p, AB = I and BA = I, and B is the conventional inverse of A.
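Numerically the decomposition is simply the singular value decomposition, and the natural inverse follows directly from it; a small sketch (the matrix is an arbitrary placeholder) is:

import numpy as np

def natural_inverse(A, tol=1.0e-12):
    # Decompose A = U Lambda V^t, keep the p non-zero eigenvalues, and form
    # the natural inverse B = V Lambda^-1 U^t.
    U, lam, Vt = np.linalg.svd(A, full_matrices=False)
    keep = lam > tol * lam.max()
    U, lam, V = U[:, keep], lam[keep], Vt[keep, :].T
    B = V @ np.diag(1.0 / lam) @ U.T
    return B, U, lam, V

A = np.array([[2.0, 0.5],
              [1.0, 1.5],
              [0.2, 0.3]])
B, U, lam, V = natural_inverse(A)
print(np.allclose(A @ B @ A, A))   # B acts as a generalized inverse of A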

Model construction

To find an ice model m that fits the data in the least-squares sense, minimize

εs² = (Am − d)ᵗ(Am − d).

The least-squares solution is

m̂ = (AᵗA)⁻¹Aᵗd,

and provides an estimate m̂ of the real ice-mass change m. Furthermore, this least-squares solution is identical to that found by using the natural inverse of A,

m̂ = Bd = VΛ⁻¹Uᵗd.    (6)

The model estimate found in this manner has absurdly large changes in ice thickness. It has over 300 km of ice melting very close to a region where ice accumulation was of the same magnitude. If the data were perfectly accurate, this solution would be the most likely solution. However, the data have errors, so we should relax the least-squares criterion in the hope of finding a physically plausible ice model. We do this by minimizing εs² subject to the constraint that the variation in the model, mᵗm, is equal to some arbitrary scalar y. As y decreases the model variation decreases, giving a more plausible model. However, the error of fit increases, but if this error is within the errors of the data, then the more plausible model with reduced variation is just as likely as the model found from the standard least-squares procedure.

Using the method of Lagrange multipliers, define a function F such that

F = εs² + α(mᵗm − y),

and α is the Lagrange multiplier. To find the optimum values of F, differentiate with respect to the ice model m and set the result equal to zero:

(AᵗA + αI)m̂ = Aᵗd.    (7)

This expression can be solved for m̂, the model estimate. As the Lagrange multiplier α is a function of y, we are at liberty to vary α and calculate mᵗm to determine y. As α increases, the variation in the model decreases, but the fit to the data becomes worse, as Figure 5 indicates. The variation in the model, here given as the root-mean-square value of change in ice thickness,

IRMS = [(1/m) Σi I(ri)²]^½,

is plotted against the root-mean-square error in fitting the non-standardized data,

d′RMS = [(1/n) Σj (s′j − d′j)²]^½,
for each value of α in Figure 6. We would like both the model variation (IRMS) and the error of fit (d′RMS) to be small, but Figure 6 shows that as IRMS is reduced, d′RMS increases. Figure 6 is therefore known as a “trade-off curve”, because we must sacrifice some of our ability to fit the data if the model variation is to be physically plausible. Such trade-off curves are an important and useful feature of inverse calculations because they indicate how an improvement in knowledge of one aspect of the model may, simultaneously, cause increased difficulties in the interpretation of the model. Through careful considerations a preferred position on the trade-off curve, usually close to the “knee” of the curve, is selected. Fortunately, for this study the model variation can be reduced considerably while the error in fitting the data barely increases. However, when IRMS is 1 200 m and d′RMS is 9.6 cm, the error of fit increases disproportionately to the change in model variation. The ice model estimate at this point is termed the “best” model because it is physically plausible with a reasonable error of fit. This model is not exactly at the “knee” of the curve because of later considerations involving the accuracy of m̂. For the best model the ice thickness change at each ice point is given in Table II, and Figure 7 indicates the close agreement between the calculated change in relative sea-level and the observations. There seems to be a systematic error, however, because the predicted emergence is too high for points distant from the ice and too low for points close to the ice. The initial assumption to disregard the viscous response of the Earth may be the cause of this slight systematic error.
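A compact sketch of this construction, using random matrices purely as placeholders for the standardized A, d, and the weighting W of this study, solves Equation (7) for a range of Lagrange multipliers and tabulates the trade-off.

import numpy as np

def damped_solution(A, d, alpha):
    # Solve (A^t A + alpha I) m_hat = A^t d  (Equation (7)).
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ d)

def tradeoff(A, d, W, alphas):
    # For each alpha, record the model variation and the misfit to the
    # non-standardized data, W(A m_hat - d) = A' m_hat - d'.
    points = []
    for alpha in alphas:
        m_hat = damped_solution(A, d, alpha)
        resid = W @ (A @ m_hat - d)
        points.append((np.sqrt(np.mean(m_hat ** 2)), np.sqrt(np.mean(resid ** 2))))
    return points

rng = np.random.default_rng(0)
A = rng.normal(size=(28, 15))          # placeholder standardized matrix
d = rng.normal(size=28)                # placeholder standardized data
W = np.diag(np.full(28, 0.05))         # placeholder standard deviations
for model_rms, misfit_rms in tradeoff(A, d, W, [1e-6, 1e-3, 1e0]):
    print(f"model RMS {model_rms:.3f}, misfit RMS {misfit_rms:.3f}")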

Fig. 5. Model variation (IRMS) and error of fit (d′RMS) plotted as functions of the Lagrange multiplier α. The standard least-squares solution (log10 α = −37) has a root-mean-square error in fitting the data of only 4.4 cm, but the model variation is 350 km. Neighboring model glacier points therefore have predicted thickness changes on the order of 300 km, which is physically impossible. As log10 α increases, the model variation dramatically decreases and the error of fit increases.

Fig. 6. A trade-off curve showing the relationship between model variation (IRMS) and error of fit (d′RMS). A huge reduction in model variation is possible before the error of fit increases noticeably. Beyond the “knee” of the curve (IRMS ≃ 2 km; d′RMS ≃ 8 cm) the d′RMS increases considerably with little reduction in IRMS. The “best” model is the point on the curve indicated by the arrow (IRMS = 1.2 km; d′RMS = 9.6 cm).

Useful insight into the effect of varying α is gained by substituting the decomposed form of A into Equation (7), giving

(VΛ²Vᵗ + αI)m̂ = VΛUᵗd.

Multiplying by Vᵗ and making some rearrangement yields

Vᵗm̂ = [Λ²(Λ² + αI)⁻¹]Λ⁻¹Uᵗd.

From Equation (6), m̂ is VVᵗm, so

m̂ = V[Λ²(Λ² + αI)⁻¹]Vᵗm.
When α = 0 this expression reduces to the standard least-squares solution for the model, but when α ≥ Λ²jj, then the eigenvector associated with that eigenvalue is attenuated. As α increases, it suppresses eigenvectors associated with even larger eigenvalues and, in effect, reduces the rank p of the matrix to some integer q < p. For this study the best model is determined when all but eight eigenvalues are suppressed, and q equals eight.
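The equivalent truncated-eigenvector estimate can be written directly from the decomposition; in the sketch below (again with an arbitrary placeholder matrix) q is the number of retained eigenvalues, eight for the best model.

import numpy as np

def truncated_solution(A, d, q):
    # Keep only the q largest eigenvalues: m_hat = V_q Lambda_q^-1 U_q^t d.
    U, lam, Vt = np.linalg.svd(A, full_matrices=False)
    Uq, lamq, Vq = U[:, :q], lam[:q], Vt[:q, :].T
    return Vq @ ((Uq.T @ d) / lamq)

rng = np.random.default_rng(0)
A = rng.normal(size=(28, 15))    # placeholder standardized matrix
d = rng.normal(size=28)          # placeholder standardized data
m_hat = truncated_solution(A, d, q=8)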

Model appraisal

The ice model constructed above is not the only model that fits the data. In fact, since the data have errors, there are an infinite number of model estimates that fit the data sufficiently well (see, for example, Figure 6). Furthermore, Backus and Gilbert (1970) show that even for a fixed value of d′RMS there are an infinite number of suitable models. We assume that the real ice model is among this infinite set of models and, although we cannot hope to single out this real model, we can, at least, determine something about the nature of the entire infinite set. From Equation (6), m̂ = Rm, where R ≡ VVᵗ, and we refer to R as the resolution matrix. The vector m represents all of the suitable models, including the real ice model, and m̂ is only an estimate of these models. Backus and Gilbert (1970) show that every suitable m has the same resolution matrix and model estimate, m̂. We can, therefore, calculate quantities (R and m̂) that are common to all suitable models, including the real model. Through inspection of R, we can decide how useful m̂ is in estimating each of the m. If m̂ is a good estimate, then we know something about m and, in particular, the real ice model. If m̂ is a poor estimate, then we can say very little about m or the real ice model, but at least we can learn something about how the data limit our knowledge. This result also warns us that we should not have great faith in m̂ just because m̂ fits the data, and it assures us that we will recognize such a situation when it arises.

Table II. Predicted changes in ice thickness. These results for the best model (the model that represents the best balance between model variation IRMS and error of fit d′RMS) indicate the uniform change in ice thickness over each ice area, Ei. Positive values indicate thinning while negative values represent thickening

Fig. 7. Comparison of the observed data with the values predicted by the best model. The fit is quite good with a root-mean-square error of only 9.6 cm.

If R is the identity matrix, then n = m = p, m̂ = m, and the real ice model is determined uniquely (for an error-free problem). The extent to which R departs from I is an indication of how useful m̂ is in estimating m. If Rii is unity and all other elements in the ith row of R are zero, then m̂i = mi, and our estimate of mi is extremely good. But if all members of the ith row are equal, then m̂i is an average over all ice loads and is a poor estimate of mi. Although such an averaged estimate contains some information about the real ice load, there is no specific information about the suitable models at position ri.

Fig. 8. Ice-model resolution matrix for the best model (q = 8). Each connected bar graph represents a row in the resolution matrix. A peak along the diagonal indicating good resolution everywhere is desired. The degree of departure from this ideal situation indicates the usefulness of the corresponding model estimate in determining the real ice change. The sequence of ice points does not necessarily reflect their relative spatial positions. For example, ice load 2 does not lie between loads 1 and 3.

The resolution matrix for the best model (q = 8) is illustrated in Figure 8. Each series of connected bar graphs represents a row in the resolution matrix. A peak approaching unity along the diagonal is ideal, but this is not the case here. Only ice loads 2, 5, 6, 11, 12, 14, and 15 are well resolved. Loads 1 and 3 are averaged predominantly over each other. For example, the estimate m̂ for ice point 3 is actually obtained by averaging the load that was on both points 1 and 3. The remaining ice loads are very poorly resolved and do not yield useful information about specific ice changes.
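The resolution matrix itself involves only the retained model eigenvectors; a sketch (again with a placeholder matrix standing in for the standardized A of this study) is:

import numpy as np

def resolution_matrix(A, q):
    # R = V_q V_q^t; it equals the identity only when q = m.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    Vq = Vt[:q, :].T
    return Vq @ Vq.T

A = np.random.default_rng(0).normal(size=(28, 15))
R = resolution_matrix(A, q=8)
print(np.round(np.diag(R), 2))   # diagonal values near unity indicate good resolution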

Estimate of model variance

We have calculated a model estimate m̂ and have indicated how well that model estimate approximates the real ice changes. One remaining question to ask is, “How much variance is expected in the model estimate due to errors in the data?” This is an important and necessary question to ask because m̂ may be plausible and well resolved but with a standard error of the estimate that is enormous.

If x = a1y1 + a2y2, with y1 and y2 independent, then the variance of x is

Var(x) = a1² Var(y1) + a2² Var(y2).

From Equation (6) the model estimate is

m̂ = VΛ⁻¹Uᵗd,

so the variance of this estimate, caused by random errors in the data, is

Var(m̂) = VΛ⁻¹Uᵗ Var(d) UΛ⁻¹Vᵗ,

and we consider only the diagonal terms of the matrix on the right. Because the standardized data have unit variance, Var(d) = I and the variances of the estimates are the diagonal terms of

VΛ⁻²Vᵗ,

or

Var(m̂k) = Σj Vkj²/Λjj².    (8)

We give the standard deviations for each estimated parameter of the best ice model in Figure 9 as error bars around m̂k. We also include the values of the diagonal terms in the resolution matrix. Only model estimates with high resolution should be considered when interpreting the results.

In general, the ice thinned by about 1 200 m with a standard deviation of 300 m, and so the error does not overwhelm m̂. Ice load 12, however, has good resolution but a model estimate slightly smaller than its standard deviation (350 m ± 380 m), indicating that we should not attach great significance to this estimate.
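Equation (8) is a one-line computation once the decomposition is available; the sketch below returns one-standard-deviation error bars for each model parameter (placeholder matrix; standardized data of unit variance assumed).

import numpy as np

def model_std(A, q):
    # Standard deviation of each model estimate: diagonal of V_q Lambda_q^-2 V_q^t.
    _, lam, Vt = np.linalg.svd(A, full_matrices=False)
    Vq, lamq = Vt[:q, :].T, lam[:q]
    return np.sqrt((Vq ** 2) @ (1.0 / lamq ** 2))

A = np.random.default_rng(0).normal(size=(28, 15))
print(model_std(A, q=8))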

Trade-off between resolution and variance

We can reduce the model variance by eliminating small eigenvalues in Equation (8) (reducing q), but this also affects the model construction, the error of fit, and the resolution matrix, because we are also obliged to reduce the number of eigenvalues and eigenvectors when calculating each of these.

Fig. 9. Glacier thickness change, standard deviation, and resolution for the best model (q = 8). The bars around each circle represent the uncertainties (one standard deviation) in the model estimate arising from errors in the data. Positive values indicate glacial thinning. The squares represent the diagonal values in the resolution matrix for each corresponding ice point.

We have already shown how the variation in the model and the error of fit are linked. Now we will consider the trade-off between model resolution and model variance.

A useful indication of the total resolution of a model is the root-mean-square value of the diagonals of R,

RRMS = [(1/m) Σi Rii²]^½.

When RRMS = 1 there is perfect resolution, and as RRMS decreases, the resolution becomes poorer. For a general estimate of the variance, consider the root-mean-square value of the model standard deviations of ice thickness,

σRMS = [(1/m) Σk σk²]^½,

where σk is the standard deviation of the ice-thickness change at ice point k, obtained from Equation (8).

Figure 10 indicates that as we reduce the number of non-zero eigenvalues (q), the standard deviation of the estimate decreases but the resolution becomes poorer. The least-squares solution retaining all eigenvalues (q = 15) has excellent resolution, but has a standard deviation in ice thickness of about 360 km. The model is known specifically at a single ice point, but it is also highly inaccurate. At the other extreme, when very few eigenvalues are retained, the model is very accurate but reveals very little about ice changes at any one position. Either extreme gives results that are difficult to interpret, so we seek a compromise and again consider a trade-off curve. This trade-off curve (Fig. 11), determined from Figure 10, illustrates how resolution and accuracy are related. The best model is near the knee of the curve and therefore has a useful balance of resolution and accuracy (RRMS = 0.68; σRMS = 280 m).
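The two summary statistics can be evaluated together for each choice of q, which is how a curve such as Figure 11 would be generated; a sketch with a placeholder matrix:

import numpy as np

def resolution_and_variance(A, q):
    # Returns (R_RMS, sigma_RMS) for a truncated solution retaining q eigenvalues.
    _, lam, Vt = np.linalg.svd(A, full_matrices=False)
    Vq, lamq = Vt[:q, :].T, lam[:q]
    R_diag = np.sum(Vq ** 2, axis=1)                  # diagonal of V_q V_q^t
    sigma = np.sqrt((Vq ** 2) @ (1.0 / lamq ** 2))    # Equation (8)
    return np.sqrt(np.mean(R_diag ** 2)), np.sqrt(np.mean(sigma ** 2))

A = np.random.default_rng(0).normal(size=(28, 15))
for q in (15, 8, 4):
    r_rms, s_rms = resolution_and_variance(A, q)
    print(f"q = {q:2d}: R_RMS = {r_rms:.2f}, sigma_RMS = {s_rms:.2f}")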

Fig. 10. The effect of reduction in the number of eigenvectors (reduction in q) upon the model standard deviation (σRMS) and the model resolution (RRMS). For the standard least-squares solution (q = 15) the resolution is excellent (RRMS = 1.0) but the standard deviation of change in ice thickness is huge (σRMS = 360 km). As we reduce the number of eigenvectors, the standard deviations become smaller but the resolution becomes poorer.

Data importance

In calculating the model estimate, the least-squares procedure does not give equal weight to all of the data because of redundancies, errors, or insensitivity to ice changes at great distances from the ice. An estimate of the relative importance of each datum is useful in determining how the data constrain the model. Furthermore, if the importance is not a function of the actual value of the datum, the importance can be used in estimating priorities for data collection in the field because we then know which data will be of greatest benefit.

Fig. 11. Trade-off curve showing the relationship of error in the model (σRMS) to model resolution (RRMS). The model error is reduced considerably with little sacrifice of resolution. The best model, indicated by the arrow, lies at the knee of the curve (σRMS = 0.28 km; RRMS = 0.68).

The model estimate m̂ gives only an estimate d̂ of the data, where

d̂ = Am̂,    (9)

but m̂ is defined (Equation (6)) as the product of the natural inverse and the standardized observations,

m̂ = VΛ⁻¹Uᵗd.

Substitution into Equation (9) gives

d̂ = UUᵗd = Dd,

where D is an (n × n) matrix defined equal to UUᵗ, so the data estimate is actually an averaged value of the observations. Minster and others (1974) suggest that the diagonal terms of D be termed the “importance values” of the data. They show that the importance value is low if the accuracy of the corresponding datum is poor, if the datum is redundant, or if it is not particularly sensitive to changes in the model. One final benefit is that D does not depend upon the values of the data, only their accuracy and positions relative to the model points (in this linear problem). The sum of the importance values always equals m, the number of model points (Minster and others, 1974). Because the column vectors in D are orthogonal, a rotation of D exists such that D becomes a diagonal matrix with ones as the first m diagonal terms and zeros elsewhere, indicating that the total information contained in the data is also m. For our problem m = 15, so the percentage of information for datum i is (Dii/m) × 100. Table III lists the importance values and cumulative percentage of the information contained in the data. We see that over 50% of this information is contained in only 8 of the 28 data points. These are all points that are near an ice load.

Table III. Relative importance among the data. The most useful data for constraining our model are those with importance values near unity. The cumulative percentage of information contained in the data shows that over 50% of the useful information is from only eight of the data points

Only 8 of the original 15 model eigenvectors are used to construct the best model, so we use only 8 data eigenvectors in calculating d̂ (U is then an n × q matrix). This modified D matrix is illustrated in Figure 12, showing that individual data points near ice loads constrain the model estimate much more than distant points, whose data estimates are actually an average of all observations. Fewer data points at great distances would be adequate, and more data near the ice loads would improve the information contained in the data considerably.
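Because D depends only on the data positions and accuracies, the same calculation can be run before any field data exist; a sketch (placeholder matrix again) ranks the observation sites by their importance values.

import numpy as np

def data_importance(A, q):
    # Diagonal of D = U_q U_q^t; its trace equals q, the number of retained eigenvectors.
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    Uq = U[:, :q]
    return np.diag(Uq @ Uq.T)

A = np.random.default_rng(0).normal(size=(28, 15))
importance = data_importance(A, q=8)
print(np.argsort(importance)[::-1][:8])   # the eight most constraining data positions
print(importance.sum())                   # equals q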

Discussion

In selecting the best ice model we considered (1) error of fit to the data, (2) model resolution, (3) model accuracy, and (4) root-mean-square variation in the model. The standard least-squares model has excellent resolution but very poor accuracy and so is difficult to interpret physically. Errors in the data allow the relaxation of the least-squares criterion so that the variation of the ice model is reduced at the expense of greater error in fitting the data. This reduction of the variation in the ice model also causes the resolution to become poorer but improves the accuracy of the model. In selecting the best model, the ice model variation, the error of fit, the resolution, and the model accuracy must all be acceptable, because, if any one of these is unacceptable, then the resulting model is difficult to interpret. Figure 13 shows the results for the best model (IRMS = 1 200 m; d′RMS = 9.6 cm; RRMS = 0.68; σRMS = 280 m) which, we believe, is a good compromise among the above criteria.

Fig. 12. Data importance matrix for the best model (q = 8). Each connected bar graph represents a row in the modified data importance matrix. A peak approaching unity on the diagonal indicates that the predicted value closely approximates the observed datum. The degree of departure from this indicates how the observed data are averaged to yield the predicted value. The sequence of data points does not necessarily reflect their relative spatial positions.

The predicted ice thinning west of Muir Inlet is 1 000 m while ice to the east thins about 1 500 m with a standard error of 350 m, and the Geikie Glacier (m5) has a predicted thinning of 1 000 m (σ = 480 m). Although a glacier advance is predicted for the glaciers surrounding Tarr and Hopkins Inlets (m7, m8, m9, m10), the resolution for this region is very poor. The Brady Glacier (m6) and the Juneau Ice Field (m15) both have enormous predicted thinning (2 260 m and 2 820 m respectively), but the area over which this thinning was supposedly occurring is an underestimate. Increasing the assumed ice area to more realistic values while holding the mass of melted ice constant decreases the thinning to about 1 500 m.

The initial assumption to disregard the time-dependent effect of viscous relaxation can now be tested by an independent data set: the historical observations of glacial thinning. Only with very good estimates of the actual amount of ice thinning between 1910 and 1960 can we make a reliable estimate of the proportion of emergence caused by the instantaneous effects. The maximum observed ice thinning in Muir Inlet between 1880 and 1946 is 790 m (Field, 1947) with a glacial retreat of 15 km. In general, the amount of glacial melting was about 50% of this maximum value. Assuming that the melting rates reported by Field (1947) for varying time spans are applicable to our entire 50-year period, the Adams Glacier thinned by 400 m while the Casement Glacier, as well as a stagnant ice stream east of Muir Inlet, thinned by 370 m. McBride Glacier lowered by only 250 m, as did the Plateau and Burroughs Glaciers, but Muir Glacier thinned by 630 m, and, between 1941 and 1946, extremely rapid melting caused the glacier surface to lower at the incredible rate of 30 m/year. The Geikie Glacier has retreated about 8 km (Bohn, 1967, p. 107) but we know of no quantitative observation of ice thinning during that time. Predicted thinning values range from about 1 000 to 1 500 m while observed values lie between 250 and 630 m, suggesting that between ¼ and ½ of the observed emergence can be explained here. Therefore a significant proportion of the observed emergence occurs simultaneously with the melting of the glaciers.

Fig. 13. Summary of results for the best model. Glacier thickness changes with low resolution are not well determined. In general the predicted glacier thickness changes are larger, by a factor of 3 or 4, than the observed changes.

The early assumption to ignore the constant terms, KE and Ke, in Equation (2) has the effect of increasing all observed emergence values by no more than 3 cm. Table III shows that, of the data points containing 50% of the information used in constructing the model, the mean emergence is 110 cm. Ignoring these correction terms, therefore, introduces approximately a 3% error which increases the predicted ice thinning by 3%. As this is a constant correction applied to all data, none of the other analyses is affected, and the standard error of the model, which is usually about 30% of the predicted ice thinning, overshadows this slight error.

Conclusions

Sea-level near glaciers is certainly quite sensitive to the mass balance of these glaciers and the emergence that occurs near the melting ice results, to a large extent, from the instantaneous effects of elastic uplift and reduced gravitational attraction of water by the melted ice. Viscous relaxation is of lesser importance when alpine glaciers, rather than continental ice sheets, are considered, perhaps because the lithosphere supports much of the load and because the characteristic rapid fluctuations of these glaciers would not permit isostatic equilibrium to be attained.

These results indicate that great care must be used when interpreting the viscosity of the Earth in this region, because only about two-thirds of the observed emergence is caused by viscous relaxation. When the immediate relative sea-level effects are not included, estimates of the Earth's viscosity (e.g. Crittenden, 1967) will be too low. The rapid emergence of Greenland shore-lines 9 000 years ago may also be due to these same immediate effects that are observed today at Glacier Bay (Clark, 1976). In this seismically active zone, it is tempting to interpret geodetic movements as possible earthquake precursors (Wyss, 1976), but clearly the glacial effects upon these measurements must be removed before they can be used to predict earthquakes.

This inverse calculation is an efficient and useful method for reconstructing ice-load fluctuations that are consistent with observed sea-level data. For this example, we considered only time-independent responses, but potential perturbation Green functions for a viscoelastic Earth model exist (Peltier, 1974), and we have already solved the forward problem for the time-dependent case (Farrell and Clark, 1976). The inverse calculation for the reconstruction of time-dependent ice loads, including the effect of viscous relaxation, is therefore possible. For data distant from the ice, the correction terms become important, and Equation (2) must be used instead of Equation (3). When we have overcome these complications, it will be possible to use existing world-wide sea-level data for the past 20 000 years to reconstruct the late-Wisconsin deglacial history. Such an ambitious inverse calculation requires that we know the viscosity profile of the Earth. This is currently in dispute (McConnell, 1968; Walcott, 1973; Peltier, 1974; Smith, 1974; Cathles, 1975), but Peltier (1976) is attempting to solve this problem with an inverse calculation.

An inverse calculation is certainly a useful and efficient means of finding a model that fits the data. The real strength of the inverse calculation, however, is that it gives a means of interpreting the constructed model on the basis of the accuracy and resolution of the model and so we know how reliable the model is. With this assurance we are not worried about using inverse calculations for the immense task of reconstructing world-wide glacial melting, because the resulting model can be appraised properly. We also emphasize that the ability of the model to fit the data is only one criterion, and this criterion should not be relied upon alone, because, as we have seen, the model that best fits the data is also highly inaccurate.

We fixed the positions and sizes of the model ice points subjectively, and changes here will slightly affect the solution and its interpretation. For this over-determined case the number of ice points is limited by the number of data points, but it is possible to solve the under-determined problem, where there may be more ice points than data points. The under-determined problem is solved using a method similar to that used here, and the extent of ice can then be more faithfully represented. For the reconstruction of the late-Wisconsin deglaciation, the inverse problem must surely be under-determined because of the paucity of complete sea-level curves throughout the world and the huge extent of the late-Wisconsin ice sheets.

Gathering field data of relative sea-level changes is both costly and time consuming. When planning field operations, we should use inverse calculations to determine the data importance of the proposed study sites and use this estimate to guide our priorities in the field.

This study only considered glacier thinning from 1910 to 1960, but glacier observations have been much more frequent and more accurate since that time. It would therefore be very useful to re-occupy the benchmarks and to analyze the emergence during the past 15 years with the method outlined above. Such a study would give a better indication of the proportion of emergence that is caused by viscous relaxation of the mantle. Establishing new benchmarks in suitable places will improve the resolution of certain ice points and greatly extend our knowledge of the processes relating glacier mass balance and sea-level fluctuations. Sea-level data from Tarr and Hopkins Inlets will be particularly useful.

Acknowledgements

I am indebted to Martin L. Smith and E. Robert Engdahl for introducing me to inversion theory and for their encouragement and help throughout all aspects of this work. Without their support I would not have completed this project. William E. Farrell and Gifford H. Miller supplied valuable criticism of an earlier version of this work and numerous discussions with Keith Echelmeyer honed my understanding of inversion theory. William E. Farrell and W. Richard Peltier kindly supplied the necessary Green functions and the University of Colorado Computing Center furnished all of the computer time. My support was from NSF Grants DES74-13047-A01, EAR74-13047-A02, and NOAA Contract Number 03-5022-94.

References

Aki, K., Christoffersson, A., and Husebye, E. 1977. Determination of the three-dimensional seismic structure of the lithosphere. Journal of Geophysical Research, Vol. 82, No. 2, p. 277-96.
Backus, G., and Gilbert, F. 1967. Numerical applications of a formalism for geophysical inverse problems. Geophysical Journal of the Royal Astronomical Society, Vol. 13, Nos. 1-3, p. 247-76.
Backus, G., and Gilbert, F. 1968. The resolving power of gross Earth data. Geophysical Journal of the Royal Astronomical Society, Vol. 16, No. 2, p. 169-205.
Backus, G., and Gilbert, F. 1970. Uniqueness in the inversion of inaccurate gross Earth data. Philosophical Transactions of the Royal Society of London, Ser. A, Vol. 266, No. 1173, p. 123-92.
Bohn, D. 1967. Glacier Bay: the land and the silence. New York, Ballantine Books.
Cathles, L. M. 1975. The viscosity of the Earth's mantle. Princeton, N.J., Princeton University Press.
Clark, J. A. 1976. Greenland's rapid postglacial emergence: a result of ice-water gravitational attraction. Geology, Vol. 4, No. 5, p. 310-12.
Crittenden, M. D. 1967. Viscosity and finite strength of the mantle as determined from water and ice loads. Geophysical Journal of the Royal Astronomical Society, Vol. 14, Nos. 1-4, p. 261-79.
Engdahl, E. R., and Johnson, L. E. 1974. Differential PcP travel times and the radius of the core. Geophysical Journal of the Royal Astronomical Society, Vol. 39, p. 435-56.
Farrell, W. E. 1973. Earth tides, ocean tides and tidal loading. Philosophical Transactions of the Royal Society of London, Ser. A, Vol. 274, p. 253-59.
Farrell, W. E., and Clark, J. A. 1976. On postglacial sea level. Geophysical Journal of the Royal Astronomical Society, Vol. 46, No. 3, p. 647-67.
Field, W. O. 1947. Glacier recession in Muir Inlet, Glacier Bay, Alaska. Geographical Review, Vol. 37, No. 3, p. 369-99.
Hicks, S. D., and Shofnos, W. 1965. The determination of land emergence from sea level observations in southeast Alaska. Journal of Geophysical Research, Vol. 70, No. 14, p. 3315-20.
Jackson, D. D. 1972. Interpretation of inaccurate, insufficient and inconsistent data. Geophysical Journal of the Royal Astronomical Society, Vol. 28, p. 97-110.
Lanczos, C. 1961. Linear differential operators. New York, D. Van Nostrand Co.
McConnell, R. K. 1968. Viscosity of the mantle from relaxation time spectra of isostatic adjustment. Journal of Geophysical Research, Vol. 73, No. 22, p. 7089-105.
Minster, J. B., Jordan, T. H., Molnar, P., and Haines, E. 1974. Numerical modelling of instantaneous plate tectonics. Geophysical Journal of the Royal Astronomical Society, Vol. 36, p. 541-76.
Parker, R. L. 1970. The inverse problem of electrical conductivity in the mantle. Geophysical Journal of the Royal Astronomical Society, Vol. 22, p. 121-38.
Parker, R. L. 1977. Understanding inverse theory. Annual Review of Earth and Planetary Sciences, Vol. 5, p. 35-64.
Peltier, W. R. 1974. The impulse response of a Maxwell Earth. Reviews of Geophysics and Space Physics, Vol. 12, No. 4, p. 649-69.
Peltier, W. R. 1976. Glacial isostatic adjustment II: the inverse problem. Geophysical Journal of the Royal Astronomical Society, Vol. 46, No. 3, p. 669-705.
Smith, P. J. 1974. Changing views of mantle viscosity. Nature, Vol. 252, No. 5479, p. 99-100.
Walcott, R. A. 1973. Structure of the Earth from glacio-isostatic rebound. Annual Review of Earth and Planetary Sciences, Vol. 1, p. 15-37.
Wiggins, R. A. 1972. The general linear inverse problem: implication of surface waves and free oscillations. Reviews of Geophysics and Space Physics, Vol. 10, No. 1, p. 251-85.
Wyss, M. 1976. Local changes of sea level before large earthquakes in South America. Bulletin of the Seismological Society of America, Vol. 66, No. 3, p. 903-14.