Introduction
During the past century relative sea-level has fallen rapidly near Glacier Bay, Alaska. Although this rapid emergence of the coast line, exceeding 3 cm/year, is occurring in a region of tectonic activity, continuous tide-gauge records indicate that the emergence is not related to earthquakes (Hicks and Shofnos, 1965). Since the numerous glaciers near Glacier Bay have retreated very rapidly during this same period, Hicks and Shofnos (1965) suggested glacial control. The rate of emergence is comparable to that observed in Hudson Bay, which is known to be a consequence of post-glacial isostatic uplift following the retreat of the Laurentide ice sheet, so they argued for a viscous relaxation mechanism beneath Glacier Bay also. Viscous flow of the mantle undoubtedly contributes to the rapid emergence, but we demonstrate here that instantaneous elastic uplift of the ocean floor and changes in the geoid, which defines the ocean surface, are also important. The elastic properties of the Earth and the classical effects of gravity are understood much better than the Earth's viscous properties, so these instantaneous causes should be considered before time-dependent viscous causes.
Here we assume that the entire sea-level response results from these immediate effects, and we determine the amount and distribution of glacial thinning required to cause the observed sea-level change. Given a glacial history, we can calculate the resulting sea-level change everywhere (Farrell and Clark, 1976), but this calculation, termed the "forward calculation", is not our goal. Rather, we want to determine the glacial history from the observed sea-level changes.
A general approach to inverse problems was first proposed by Backus and Gilbert (1967, 1968, 1970) and their method has subsequently been used to study a variety of geophysical problems (e.g. Parker, 1970; Engdahl and Johnson, 1974; Minster and others, 1974; Aki and others, 1977; Peltier, 1976). Our development closely follows an extension of this procedure discussed thoroughly by Jackson (1972), Wiggins (1972), and Parker (1977). To solve the inverse problem requires finding a glacial history that fits the relative sea-level data sufficiently well. If the physics of the problem is adequately described, with realistic assumptions, then it is reasonable to suppose that a glacial history exists for the model. We might discover the correct history by trial and error, that is, by subjectively varying the glacial history until the forward calculation predicts sea-level changes that are sufficiently close to the measured data. Or we might select glacial histories at random until we accidentally discover one that is suitable. Here we apply a more efficient and systematic approach that uses the observed emergence to construct a glacial history that is consistent with the data, but the success of this construction does not imply that the inverse problem is solved. Because of errors in the emergence data, the glacial history that best fits the data is not unique, and this non-uniqueness is the main problem addressed by inverse theory. Other glacial histories might also predict emergences that lie within the errors of the data. Inverse theory provides an estimate of the degree of departure from uniqueness, and hence indicates how much we really know about the actual glacial history.
Because of these uncertainties, we can only consider models of the glacial history. Also errors in the data will propagate to the ice model, and the inverse theory shows how to calculate these errors in the model. Finally, all data are not equally important in constraining the ice model. The use of inverse theory allows us to determine the relative importance of each datum, giving insight into the model and providing a means of determining the best localities for data collection when designing a field project.
Emergence data
Hicks and Shofnos (1965) reported the average rate of emergence at 31 localities surrounding Glacier Bay. Five of these emergence rates were obtained from continuous tide-gauge records, and the remaining ones were determined by re-occupying, in 1959 and 1960, benchmarks that were established between 1887 and 1940. All but three of these data are used here.
We discard emergence data at Yakutat because the continuous sea-level record indicates an abrupt rise of sea-level at the time of the 1958 Alaska earthquake (Richter magnitude 8). The benchmark in Bartlett Cove, near the Glacier Bay National Park Service headquarters, is on Quaternary alluvium and does not have a continuous record. It may have been disturbed during the 1958 earthquake and is eliminated from the data set. Ketchikan is far from Glacier Bay and we discard its emergence data because it would constrain the ice model very little.
The 28 emergence-rate values were multiplied by 50 years to get the net change in sea-level between the years 1910 and 1960. For some data this means extending the rate beyond the range of observation, but for others it represents a time interval less than the full data available. The positions of these 28 data points are illustrated in Figure 1.
Errors in the data arise from (1) meteorological and oceanographic effects, (2) differences in length of record (19–73 years), and (3) time-dependent rates of emergence. For those stations that have a continuous record, Hicks and Shofnos report standard errors for the rate of emergence. These standard errors, when multiplied by 50 years, give the standard deviations of the corresponding data. For the majority of the data, which are discontinuous and represent just two observations, the above method is inapplicable. To partially eliminate the meteorological and oceanographic effects, Hicks and Shofnos adjusted the discontinuous record by the amount that the comparable continuous record at Ketchikan differed from the mean value at Ketchikan. This is a standard procedure for comparing sea-level observations of different lengths and at different times at a single station (Hicks and Shofnos, 1965). Changes in the rate of emergence cannot be detected unless the series is continuous. The errors for the discontinuous records must be inferred from the standard deviations of the continuous records. Figure 2 shows an attempt to describe the standard deviation as a function of total emergence with a linear regression equation, and we hope that this equation will provide a useful estimate of the standard deviation of those data that do not have a continuous record. The fit
where y is the standard deviation in meters of the emergence datum and x is the total emergence in meters in 50 years, gives a correlation coefficient of 0.87 (significance level = 90%) with a standard error of the estimate of 1.9 cm. We assume Equation (1) also models the errors present in the discontinuous records. Because no attempt was made to eliminate the meteorological and oceanographic effects from the continuous records, as was done for the discontinuous records, the resulting error estimates determined from the continuous records are probably greater than the actual errors. Table I lists the data values and the calculated errors.
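The regression step above can be sketched in a few lines. The sample values below are invented for illustration only and do not reproduce the published measurements, so the fitted coefficients merely mimic the form of Equation (1).

```python
import numpy as np

# Hypothetical illustration of the error model: relate total emergence
# x (m) to the standard deviation y (m) of an emergence datum, as was
# done for the continuous tide-gauge records.  These sample values are
# invented and do NOT reproduce the published fit.
x = np.array([0.4, 0.8, 1.2, 1.6, 2.0])            # total emergence, m
y = np.array([0.015, 0.022, 0.035, 0.041, 0.055])  # standard deviation, m

b, a = np.polyfit(x, y, 1)      # least-squares fit y = a + b*x
r = np.corrcoef(x, y)[0, 1]     # correlation coefficient of the fit

# Estimated standard deviation for a discontinuous record with,
# say, 1.5 m of total emergence over the 50-year interval
sigma_est = a + b * 1.5
```

With the real emergence data, the fitted a and b would reproduce Equation (1), and sigma_est would supply the standard deviation assigned to each discontinuous record.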
Relative sea-level change on an elastic earth—the solution to the forward problem
The formulation for the inverse calculation requires that we know how a small change in the ice-load model produces a corresponding change in relative sea-level. This problem is treated in detail by Farrell and Clark (1976) and is briefly reviewed below.
When a mass is placed on the Earth, it changes the gravitational equipotential surface (the geoid) and it deforms the solid surface of the Earth. The observed change in relative sea-level is the change in the distance between the ocean surface, defined by the geoid, and the ocean floor. Farrell (1973) has calculated the change in the potential at the undeformed Earth's surface anywhere on a spherically symmetric, elastic, self-gravitating Earth model due to a surficial point load. The potential change results both from the mass itself perturbing the potential field and from density changes within the Earth caused by the distortion of the loaded Earth. Farrell has also calculated the radial displacement of the Earth's surface so that the change in the potential caused by the change in elevation can be determined. Adding all of these effects gives the potential perturbation anywhere on the deformed surface of the Earth caused by a point load. This potential perturbation Green function G′ is shown in Figure 3 as a function of angular distance.
To determine the potential perturbation anywhere on the Earth from a distributed load, consider the load to be a collection of point loads and add the effect of each. If L(r′) is the point load on the surface of the Earth at the position represented by the vector r′, so that the function L represents a surface density of the load, then the potential perturbation ψ at r is

ψ(r) = ∫ G′(r − r′) L(r′) dΩ,

where the notation (r − r′) represents the angle between position vectors r and r′; dΩ is an element of area; and the integral is evaluated over the entire areal extent of the load.
Farrell and Clark (1976) show that, for sea-level to remain on an equipotential surface, the radial increase in sea-level s′ is equal to ψ/g, with g the acceleration due to gravity. The change in sea-level shape s′ is not constant everywhere, and to ensure that mass is conserved, this expression must be corrected by a constant amount K_e.
For the case where ice melts and the melt water flows into the ocean, the load L is composed of an ice load L_I and a water load L_w. We can consider this load as

L = ρ_I I + ρ_w s′,

where ρ_I and ρ_w are the densities of ice and water respectively, and I and s′ are the changes in the thickness of the ice and ocean.
Therefore, when the constant ocean-wide average rise (eustatic rise) K_E is included to account for melt water flowing into the ocean, the sea-level change is the solution of an integral equation:

s′(r) = (ρ_w/g) ∫ G′(r − r′) s′(r′) dΩ + (ρ_I/g) ∫ G′(r − r′) I(r′) dΩ + K_E + K_e.   (2)
This expression for sea-level change can be greatly simplified because in this study we consider only sea-level changes close to the melting glaciers. Ketchikan, which is almost 450 km from Glacier Bay, has experienced very little change in relative sea-level, and the Green function (Fig. 3) also suggests that the effect of a point load upon sea-level at such a great distance is approximately two orders of magnitude less than the change expected only 10 km from the load. To be conservative, we assume that sea-level is affected by the water load that is as distant as 1 000 km from the data positions. Even then the water load is two orders of magnitude less than the ice load in Equation (2), and so we are justified in ignoring the first term accounting for the water load.
The third term, K_E, which corrects Equation (2) for melt water increasing the volume of the oceans, is less than 1.5 cm for the ice melting considered here. Our experience, from numerous forward calculations, is that the last term, K_e, is usually much less than K_E, so the constant correction terms combined are less than about 3 cm. Table I indicates that this 3 cm correction is, in general, less than 10% of the observed values and is less than the standard deviation of the measurement. Because the values of these correction terms depend upon the ice model, it is convenient to ignore these terms also at this time. This assumption introduces a small systematic error which is considered later.
When we invoke the above assumptions, which are valid only for regions close to the melting glaciers, the predicted emergence caused by a change in glacier thickness is

s′(r) = (ρ_I/g) ∫ G′(r − r′) I(r′) dΩ.   (3)
To find I(r'), one method is to make a guess of I(r') and to determine how closely s’ fits the observed emergence, d’. If the fit is poor, then I(r') is changed so that the fit (hopefully) improves. Such methods are inefficient so we now show a systematic way to determine ice thickness I(r') from sea-level observations, d'(r).
The formulation of the inverse problem
A consequence of the Green-function method and the above assumptions is that the forward problem is linear, because G’ does not depend upon I. Hence the inverse problem is linear also. Most geophysical inverse problems are non-linear, requiring an initial guess of a model that is close to the correct model, but the exact linearity of our problem indicates that we need no a priori model.
For a numerical solution of Equation (3), it is convenient to rewrite this equation in matrix notation. If E is the total loaded area, it is assumed there exist m disjoint regions E_i (i = 1, …, m) of E, centered at r_i respectively, such that I takes a distinct constant value I(r_i) on E_i. The sea-level change at position r_j caused by the uniform load on element E_i is

s′(r_j) = (ρ_I/g) I(r_i) ∫_{E_i} G′(r_j − r′) dΩ.

This expression is rewritten as

s′(r_j) = A′_ji m_i,

where m_i is the mass of ice that melts (negative for accretion) over area E_i, and

A′_ji = (1/(g a_i)) ∫_{E_i} G′(r_j − r′) dΩ,

with a_i the area of E_i, is the effect of a one kilogram mass, uniformly distributed over E_i, upon sea-level at position r_j. The total sea-level change at r_j is

s′(r_j) = Σ_i A′_ji m_i.
In matrix notation, Equation (3) now has the approximate form

s′ = A′m,

where s′ is a column vector with n elements, m is a column vector with m elements, and A′ is an n × m matrix. If the data form a vector d′, then one of our goals is to find an ice model m such that

d′ = A′m.   (5)
If m = n and A’ is non-singular, there is a unique model m, but errors in the data may cause this model to be very dissimilar from the real glacier change. If n > m the problem is over-determined, with more equations (data) than unknowns, and a least-squares solution exists. When m > n, the problem is under-determined with fewer equations than unknowns, but we can still infer something about the unknowns from the limited data.
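A sketch of how the forward matrix A′ might be assembled numerically follows. The greens() function is a stand-in for the tabulated potential-perturbation Green function of Farrell (1973); its 1/θ fall-off and every number below are purely illustrative.

```python
import numpy as np

# Sketch of assembling the forward matrix A' (n data sites by m ice
# loads).  greens() is a placeholder for Farrell's tabulated Green
# function; the 1/theta form and all numbers are illustrative only.
g = 9.81  # acceleration due to gravity, m/s^2

def greens(theta):
    """Hypothetical Green function value at angular distance theta (rad)."""
    return 1.0e-6 / theta

def forward_matrix(angles):
    """angles[j, i]: angular distance between datum j and ice load i."""
    n, m = angles.shape
    A = np.empty((n, m))
    for j in range(n):
        for i in range(m):
            # effect of a 1 kg load at point i on sea level at site j
            A[j, i] = greens(angles[j, i]) / g
    return A

angles = np.array([[0.01, 0.02],
                   [0.03, 0.01]])
A_prime = forward_matrix(angles)
masses = np.array([1.0e12, -5.0e11])   # melted ice mass per load, kg
s_prime = A_prime @ masses             # predicted sea-level change
```

Because the Green function decays with distance, a datum close to a load contributes a much larger matrix element than a distant one, which is what later makes nearby data the most important.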
The positions and sizes of the 15 ice loads used in this study are shown in Figure 4. All loaded areas are here assumed equal, but this is not a necessary assumption. They include existing glaciers and, in particular, regions where ice changes have occurred within the past century (Bohn, 1967, p. 107). The disc shape is used for convenience, but more irregular shapes may also be employed for greater realism. The number of data points (28) exceeds the number of ice model points (15), making this an over-determined problem.
Each datum d′_i has a different standard deviation and should be transformed so that the variance of every new datum is unity (Wiggins, 1972). This will ensure that the model depends more on accurate data than on inaccurate data and will simplify the interpretation of the model at a later stage. Let W be a diagonal n × n matrix where each diagonal element W_ii is the standard deviation of the datum d′_i, calculated from the linear regression Equation (1). To standardize the data, multiply Equation (5) by the inverse of W, and let A = W⁻¹A′ and d = W⁻¹d′, so that in the standardized coordinate system Equation (5) becomes

d = Am,

and every datum has unit variance.
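The standardization is a one-line matrix operation; the numbers below are invented for illustration.

```python
import numpy as np

# Standardizing the data: divide each datum (and the corresponding row
# of A') by that datum's standard deviation, so every transformed
# datum has unit variance.  All numbers are invented.
A_prime = np.array([[2.0, 1.0],
                    [1.0, 3.0],
                    [0.5, 0.5]])
d_prime = np.array([1.5, 2.0, 0.6])
sigma = np.array([0.1, 0.2, 0.05])   # standard deviation of each datum

W_inv = np.diag(1.0 / sigma)         # inverse of the diagonal matrix W
A = W_inv @ A_prime
d = W_inv @ d_prime
# Accurate data (small sigma) now carry proportionally more weight.
```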
We want to know the nature of m, but before proceeding a brief review of eigenanalysis is worthwhile and a definition of terms is necessary.
The decomposition theorem and the natural inverse
Consider the "shifted eigenvalue problem" of Lanczos (1961, p. 117) for any real n × m matrix A:

Av = λu,
Aᵗu = λv,

where the superscript t indicates matrix transpose. This system of equations can be solved for v, u, and λ because substitution yields the uncoupled eigenvalue problems

AᵗAv = λ²v

and

AAᵗu = λ²u,

which can be solved using well-known methods. For any matrix A there will exist p non-zero eigenvalues λ_i, where p is less than or equal to the minimum of n and m. Associated with each non-zero eigenvalue λ_i there exists a "model eigenvector" v_i and a "data eigenvector" u_i. The matrix Λ is defined as the p × p diagonal matrix with Λ_ii = λ_i, ordered such that Λ_ii ≥ Λ_jj for i < j. Define V as the m × p matrix whose jth column is the model eigenvector v_j with eigenvalue Λ_jj. The n × p matrix whose jth column is the data eigenvector u_j associated with Λ_jj is defined as U.
It can be shown that UᵗU = I and VᵗV = I, where I is the identity matrix. Both UUᵗ and VVᵗ, however, equal the identity matrix only when n = m = p.
The "natural inverse" B of the matrix A is defined by Lanczos (1961, p. 124) as

B = VΛ⁻¹Uᵗ,

and has the property that

AB = UUᵗ

and

BA = VVᵗ.
For the case where m = n = p, AB = I and BA = I, and B is the conventional inverse of A.
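The shifted eigenvalue problem is equivalent to the singular value decomposition, which numerical libraries compute directly: the singular values are the λ_i, and the columns of U and V are the data and model eigenvectors. A toy example, with an invented matrix, verifies the properties of the natural inverse:

```python
import numpy as np

# Toy demonstration of the natural inverse via the SVD.
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])     # n = 2 data, m = 3 model points

U, lam, Vt = np.linalg.svd(A, full_matrices=False)
p = int(np.sum(lam > 1e-12))        # number of non-zero eigenvalues

# Natural inverse B = V Lambda^-1 U^t
B = Vt.T[:, :p] @ np.diag(1.0 / lam[:p]) @ U[:, :p].T

AB = A @ B   # equals U U^t (here the 2x2 identity, since n = p)
BA = B @ A   # equals V V^t, NOT the 3x3 identity, since m > p
```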
Model construction
To find an ice model m that fits the data in the least-squares sense, minimize

εₛ² = (d − Am)ᵗ(d − Am).

The least-squares solution is

m̂ = (AᵗA)⁻¹Aᵗd,

and provides an estimate m̂ of the real ice-mass change m. Furthermore, this least-squares solution is identical to that found by using the natural inverse of A,

m̂ = Bd = VΛ⁻¹Uᵗd.   (6)
The model estimate found in this manner has absurdly large changes in ice thickness. It has over 300 km of ice melting very close to a region where ice accumulation was of the same magnitude. If the data were perfectly accurate, this solution would be the most likely solution. However, the data have errors, so we should relax the least-squares criterion in the hope of finding a physically plausible ice model. We do this by minimizing εₛ² subject to the constraint that the variation in the model, mᵗm, is equal to some arbitrary scalar y. As y decreases, the model variation decreases, giving a more plausible model. The error of fit increases, but if this error is within the errors of the data, then the more plausible model with reduced variation is just as likely as the model found from the standard least-squares procedure.
Using the method of Lagrange multipliers, define a function F such that

F = (d − Am)ᵗ(d − Am) + α(mᵗm − y),

where α is the Lagrange multiplier. To find the optimum values of F, differentiate with respect to the ice model m and set the result equal to zero:

(AᵗA + αI)m̂ = Aᵗd.   (7)

This expression can be solved for m̂, the model estimate. As the Lagrange multiplier α is a function of y, we are at liberty to vary α and calculate mᵗm to determine y. As α increases, the variation in the model decreases, but the fit to the data becomes worse, as Figure 5 indicates. The variation in the model, here given as the root-mean-square value of the change in ice thickness,

I_RMS = [(1/m) Σ_i I(r_i)²]^½,

is plotted against the root-mean-square error in fitting the non-standardized data,

d′_RMS = [(1/n) Σ_j (d′_j − s′_j)²]^½,

for each value of α in Figure 6.
for each value of α in Figure 6. We would like both the model variation (I RMS) and the error of fit (d'RMS) to be small, but Figure 6 shows that as I RMS is reduced, d’RMS increases. Figure 6 is therefore known as a “trade-off curve”, because we must sacrifice some of our ability to fit the data if the model variation is to be physically plausible. Such trade-off curves are an important and useful feature of inverse calculations because they indicate how an improvement in knowledge of one aspect of the model may, simultaneously, cause increased difficulties in the interpretation of the model. Through careful considerations a preferred position on the trade-off curve, usually close to the “knee” of the curve, is selected. Fortunately, for this study the model variation can be reduced considerably while the error in fitting the data barely increases. However, when I RMS is 1 200 m, and d'RMS is 9.6 cm, the error of fit increases disproportionately to the change in model variation. The ice model estimate at this point is termed the “best” model because it is physically plausible with a reasonable error of fit. This model is not exactly at the “knee” of the curve because of later considerations involving the accuracy of . For the best model the ice thickness change at each ice point is given in Table II, and Figure 7 indicates the close agreement between the calculated change in relative sea-level and the observations. There seems to be a systematic error, however, because the predicted emergence is too high for points distant from the ice and too low for points close to the ice. The initial assumption to disregard the viscous response of the Earth may be the cause of this slight systematic error.
Useful insight into the effect of varying α is gained by substituting the decomposed form of A into Equation (7), giving

m̂ = V(Λ² + αI)⁻¹ΛUᵗd.

Multiplying by Vᵗ and making some rearrangement yields

Vᵗm̂ = (Λ² + αI)⁻¹ΛUᵗd.

From Equation (6), m̂ is VVᵗm when α = 0, so in general

m̂ = V(Λ² + αI)⁻¹Λ²Vᵗm.   (8)
When α = 0 this expression reduces to the standard least-squares solution for the model, but when α ≥ Λ²_jj, the eigenvector associated with that eigenvalue is attenuated. As α increases, it suppresses eigenvectors associated with even larger eigenvalues and, in effect, reduces the rank p of the matrix to some integer q < p. For this study the best model is determined when all but eight eigenvalues are suppressed, and q equals eight.
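The attenuation can be made explicit: in the eigenvector basis, damping multiplies each coefficient by the filter factor λ²/(λ² + α). The eigenvalues below are invented for illustration.

```python
import numpy as np

# Filter factors lambda^2 / (lambda^2 + alpha): eigenvectors with
# alpha >= lambda^2 are strongly attenuated, effectively reducing the
# rank from p to some q < p.  Illustrative eigenvalues only.
lam = np.array([10.0, 5.0, 1.0, 0.1])
alpha = 1.0
filt = lam**2 / (lam**2 + alpha)
# Large eigenvalues pass almost unchanged; small ones are suppressed.
```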
Model appraisal
The ice model constructed above is not the only model that fits the data. In fact, since the data have errors, there are an infinite number of model estimates that fit the data sufficiently well (see, for example, Figure 6). Furthermore, Backus and Gilbert (1970) show that even for a fixed value of d′_RMS there are an infinite number of suitable models. We assume that the real ice model is among this infinite set of models and, although we cannot hope to single out this real model, we can, at least, determine something about the nature of the entire infinite set. From Equation (6), m̂ = Rm, where R ≡ VVᵗ and we refer to R as the resolution matrix. The vector m represents all of the suitable models, including the real ice model, and m̂ is only an estimate of these models. Backus and Gilbert (1970) show that every suitable m has the same resolution matrix and model estimate, m̂. We can, therefore, calculate quantities (R and m̂) that are common to all suitable models, including the real model. Through inspection of R, we can decide how useful m̂ is in estimating each of the m. If m̂ is a good estimate, then we know something about m and, in particular, the real ice model. If m̂ is a poor estimate, then we can say very little about m or the real ice model, but at least we can learn something about how the data limit our knowledge. This result also warns us that we should not have great faith in m̂ just because m̂ fits the data, and it assures us that we will recognize such a situation when it arises.
If R is the identity matrix, then n = m = p, m̂ = m, and the real ice model is determined uniquely (for an error-free problem). The extent to which R departs from I is an indication of how useful m̂ is in estimating m. If R_ii is unity and all other elements in the ith row of R are zero, then m̂_i = m_i, and our estimate of m_i is extremely good. But if all members of the ith row are equal, then the calculated value m̂_i is only an estimate, shared by all m, in which m̂_i is averaged over all ice loads, and m̂_i is a poor estimate of m_i. Although such an averaged estimate contains some information about the real ice load, there is no specific information about the suitable models at position r_i.
The resolution matrix for the best model (q = 8) is illustrated in Figure 8. Each series of connected bar graphs represents a row in the resolution matrix. A peak approaching unity along the diagonal is ideal, but this is not the case here. Only ice loads 2, 5, 6, 11, 12, 14, and 15 are well resolved. Loads 1 and 3 are averaged predominantly over each other. For example, m̂_i for ice point 3 is actually an estimate obtained from averaging the load that was on both points 1 and 3. The remaining ice loads are very poorly resolved and do not yield useful information about specific ice changes.
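The averaging behavior of R can be shown with a toy system in which two model points enter the data only through their sum, a situation analogous to loads 1 and 3 above:

```python
import numpy as np

# Resolution matrix R = V V^t for a toy problem.  A diagonal element
# near 1 marks a well-resolved load; a row spread over several loads
# marks an averaged, poorly resolved estimate.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])    # two data, three model points

U, lam, Vt = np.linalg.svd(A, full_matrices=False)
R = Vt.T @ Vt                      # V V^t

# The first two loads are averaged over each other (entries of 0.5);
# the third load is perfectly resolved (diagonal entry of 1).
```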
Estimate of model variance
We have calculated a model estimate and have indicated how well that model estimate approximates the real ice changes. One remaining question to ask is, "How much variance is expected in the model estimate due to errors in the data?" This is an important and necessary question because m̂ may be plausible and well-resolved but have an enormous standard error of the estimate.
If x = a₁y₁ + a₂y₂, then the variance of x is

Var (x) = a₁² Var (y₁) + a₂² Var (y₂) + 2a₁a₂ Cov (y₁, y₂),

which reduces to the first two terms when y₁ and y₂ are independent.
From Equation (6) the model estimate is

m̂ = Bd = VΛ⁻¹Uᵗd,

so the variance of this estimate, caused by random errors in the data, is

Var (m̂) = B Var (d) Bᵗ,

and we consider only the diagonal terms of the matrix on the right. Because the standardized data have unit variance, Var (d) = I and the variances of the estimates are the diagonal terms of

BBᵗ = VΛ⁻¹UᵗUΛ⁻¹Vᵗ,

or

VΛ⁻²Vᵗ.
We give the standard deviations for each estimated parameter of the best ice model in Figure 9 as error bars around m̂_k. We also include the values of the diagonal terms in the resolution matrix. Only model estimates with high resolution should be considered when interpreting the results.
In general, the ice thinned by about 1 200 m with a standard deviation of 300 m, and so the error does not overwhelm m̂. Ice load 12, however, has good resolution but a model estimate slightly smaller than its standard deviation (350 m ± 380 m), indicating that we should not attach great significance to this estimate.
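The variance formula above amounts to reading off the diagonal of VΛ⁻²Vᵗ; a toy matrix (invented numbers) makes the role of small eigenvalues explicit:

```python
import numpy as np

# With unit-variance standardized data, the model variances are the
# diagonal of V Lambda^-2 V^t: a small eigenvalue inflates the
# variance of the parameters its eigenvector involves.
A = np.array([[2.0, 0.0],
              [0.0, 0.5],
              [0.0, 0.0]])         # toy standardized forward matrix

U, lam, Vt = np.linalg.svd(A, full_matrices=False)
V = Vt.T
var_m = np.diag(V @ np.diag(1.0 / lam**2) @ V.T)
# The parameter tied to eigenvalue 2 has variance 1/4; the one tied
# to eigenvalue 0.5 has variance 4.
```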
Trade-off between resolution and variance
We can reduce the model variance by eliminating small eigenvalues in Equation (8) (reducing q), but this also affects the model construction, the error of fit, and the resolution matrix, because we are also obliged to reduce the number of eigenvalues and eigenvectors when calculating each of these.
We have already shown how the variation in the model and the error of fit are linked. Now we will consider the trade-off between model resolution and model variance.
A useful indication of the total resolution of a model is the root-mean-square value of the diagonals of R,

R_RMS = [(1/m) Σ_i R_ii²]^½.

When R_RMS = 1 there is perfect resolution, and as R_RMS decreases the resolution becomes poorer. For a general estimate of the variance, consider the root-mean-square value of the model standard deviations of ice thickness,

σ_RMS = [(1/m) Σ_i σ_i²]^½,

where σ_i is the standard deviation of the ice-thickness change at load i.
Figure 10 indicates that as we reduce the number of non-zero eigenvalues (q), the standard deviation of the estimate decreases but the resolution becomes poorer. The least-squares solution retaining all eigenvalues (q = 15) has excellent resolution, but has a standard deviation in ice thickness of about 360 km. The model is known specifically at a single ice point, but it is also highly inaccurate. At the other extreme, when very few eigenvalues are retained, the model is very accurate but reveals very little about ice changes at any one position. Either extreme gives results that are difficult to interpret, so we seek a compromise and again consider a trade-off curve. This trade-off curve (Fig. 11), determined from Figure 10, illustrates how resolution and accuracy are related. The best model is near the knee of the curve and therefore has a useful balance of resolution and accuracy (R_RMS = 0.68; σ_RMS = 280 m).
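The resolution-variance trade-off can be computed directly from a truncated eigenvector set; the random 28 × 15 system below stands in for the real matrix:

```python
import numpy as np

# As the retained rank q drops, the model standard deviation falls but
# so does the resolution.  Random toy system, not the Glacier Bay A.
rng = np.random.default_rng(1)
A = rng.normal(size=(28, 15))
U, lam, Vt = np.linalg.svd(A, full_matrices=False)
V = Vt.T

def appraise(q):
    """Return (R_RMS, sigma_RMS) when only q eigenvectors are kept."""
    Vq, lq = V[:, :q], lam[:q]
    R_diag = np.diag(Vq @ Vq.T)
    var = np.diag(Vq @ np.diag(1.0 / lq**2) @ Vq.T)
    return np.sqrt(np.mean(R_diag**2)), np.sqrt(np.mean(var))

r_full, s_full = appraise(15)    # perfect resolution, large variance
r_trunc, s_trunc = appraise(8)   # poorer resolution, smaller variance
```

Evaluating appraise(q) for each q and plotting the pairs reproduces the shape of the trade-off curve of Figure 11.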
Data importance
In calculating the model estimate, the least-squares procedure does not give equal weight to all of the data because of redundancies, errors, or insensitivity to ice changes because of great distances from the ice. An estimate of the relative importance of each datum is useful in determining how the data constrain the model. Furthermore, if the importance is not a function of the actual value of the datum, the importance can be used in estimating priorities for data collection in the field because we then know which data will be of greatest benefit.
The model estimate m̂ gives only an estimate d̂ of the data, where

d̂ = Am̂,   (9)

but m̂ is defined (Equation (6)) as the product of the natural inverse and the standardized observations,

m̂ = Bd.

Substitution into Equation (9) gives

d̂ = ABd = UUᵗd = Dd,
where D is an n × n matrix equal to UUᵗ, so the data estimate is actually an averaged value of the observations. Minster and others (1974) suggest that the diagonal terms of D be termed the "importance values" of the data. They show that the importance value is low if the accuracy of the corresponding datum is poor, if the datum is redundant, or if it is not particularly sensitive to changes in the model. One final benefit is that D does not depend upon the values of the data, only their accuracy and positions relative to the model points (in this linear problem). The sum of the importance values always equals m, the number of model points (Minster and others, 1974). Because the column vectors in D are orthogonal, a rotation of D exists such that D becomes a diagonal matrix with ones as the first m diagonal terms and zeros elsewhere, indicating that the total information contained in the data is also m. For our problem m = 15, so the percentage of information for datum i is (D_ii/m) × 100. Table III lists the importance values and cumulative percentage of the information contained in the data. We see that over 50% of this information is contained in only 8 of the 28 data points. These are all points that are near an ice load.
Only 8 of the original 15 model eigenvectors are used to construct the best model, so we use only 8 data eigenvectors in calculating D (U is then an n × q matrix). This modified D matrix is illustrated in Figure 12, showing that individual data points near ice loads constrain the model estimate much more than distant points, whose data estimates are actually an average of all observations. Fewer data points at great distances would be adequate, and more data near the ice loads would improve the information contained in the data considerably.
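The importance matrix is cheap to compute once the decomposition is known; a random 28 × 15 system (invented, like the earlier toys) verifies that the importance values sum to the number of model points:

```python
import numpy as np

# Data importance: D = U U^t.  Its diagonal sums to the number of
# retained eigenvectors; a small diagonal entry flags a datum that is
# redundant, inaccurate, or insensitive to the model.  Importance
# depends only on A (accuracy and geometry), not on observed values.
rng = np.random.default_rng(2)
A = rng.normal(size=(28, 15))
U, lam, Vt = np.linalg.svd(A, full_matrices=False)

D = U @ U.T
importance = np.diag(D)
total = importance.sum()         # equals m = 15 when all are kept
```

Because D does not involve the data vector, this calculation can be run at the planning stage of a field project to rank prospective measurement sites.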
Discussion
In selecting the best ice model we considered (1) error of fit to the data, (2) model resolution, (3) model accuracy, and (4) root-mean-square variation in the model. The standard least-squares model has excellent resolution but very poor accuracy and so is difficult to interpret physically. Errors in the data allow the relaxation of the least-squares criterion so that the variation of the ice model is reduced at the expense of greater error in fitting the data. This reduction of the variation in the ice model also causes the resolution to become poorer but improves the accuracy of the model. In selecting the best model, the ice model variation, the error of fit, the resolution, and the model accuracy must all be acceptable, because, if any one of these is unacceptable, then the resulting model is difficult to interpret. Figure 13 shows the results for the best model (I_RMS = 1 200 m; d′_RMS = 9.6 cm; R_RMS = 0.68; σ_RMS = 280 m) which, we believe, is a good compromise among the above criteria.
The predicted ice thinning west of Muir Inlet is 1 000 m, while ice to the east thins about 1 500 m with a standard error of 350 m; the Geikie Glacier (m 5) has a predicted thinning of 1 000 m (σ = 480 m). Although a glacier advance is predicted for the glaciers surrounding Tarr and Hopkins Inlet (m 7, m 8, m 9, m 10), the resolution for this region is very poor. The Brady Glacier (m 6) and the Juneau Ice Field (m 15) both have enormous predicted thinning (2 260 m and 2 820 m respectively), but the area over which this thinning was supposedly occurring is an underestimate. Increasing the assumed ice area to more realistic values while holding the mass of melted ice constant decreases the thinning to about 1 500 m.
The initial assumption to disregard the time-dependent effect of viscous relaxation can now be tested against an independent data set: the historical observations of glacial thinning. Only with very good estimates of the actual amount of ice thinning between 1910 and 1960 can we make a reliable estimate of the proportion of emergence caused by the instantaneous effects. The maximum observed ice thinning in Muir Inlet between 1880 and 1946 is 790 m (Field, 1947), with a glacial retreat of 15 km. In general, the amount of glacial melting was about 50% of this maximum value. Assuming that the melting rates reported by Field (1947) for varying time spans are applicable to our entire 50-year period, the Adams Glacier thinned by 400 m, while the Casement Glacier as well as a stagnant ice stream east of Muir Inlet thinned by 370 m. McBride Glacier lowered by only 250 m, as did the Plateau and Burroughs Glaciers, but Muir Glacier thinned by 630 m, and, between 1941 and 1946, extremely rapid melting caused the glacier surface to lower at the incredible rate of 30 m/year. The Geikie Glacier has retreated about 8 km (Bohn, 1967, p. 107), but we know of no quantitative observation of ice thinning during that time. Predicted thinning values range from about 1 000 to 1 500 m while observed values lie between 250 and 630 m, suggesting that between ¼ and ½ of the observed emergence can be explained here. Therefore a significant proportion of the observed emergence occurs simultaneously with the melting of the glaciers.
The early assumption to ignore the constant terms, K_E and K_e, in Equation (2) has the effect of increasing all observed emergence values by no more than 3 cm. Table III shows that, for the data points containing 50% of the information used in constructing the model, the mean emergence is 110 cm. Ignoring these correction terms therefore introduces approximately a 3% error, which increases the predicted ice thinning by 3%. As this is a constant correction applied to all data, none of the other analyses is affected, and the standard error of the model, which is usually about 30% of the predicted ice thinning, overshadows this slight error.
Conclusions
Sea-level near glaciers is certainly quite sensitive to the mass balance of these glaciers and the emergence that occurs near the melting ice results, to a large extent, from the instantaneous effects of elastic uplift and reduced gravitational attraction of water by the melted ice. Viscous relaxation is of lesser importance when alpine glaciers, rather than continental ice sheets, are considered, perhaps because the lithosphere supports much of the load and because the characteristic rapid fluctuations of these glaciers would not permit isostatic equilibrium to be attained.
These results indicate that great care must be taken when interpreting the viscosity of the Earth in this region, because only about two-thirds of the observed emergence is caused by viscous relaxation. When the immediate relative sea-level effects are not included, estimates of the Earth's viscosity (e.g. Crittenden, 1967) will be too low. The rapid emergence of Greenland shore-lines 9 000 years ago may also be due to the same immediate effects that are observed today at Glacier Bay (Clark, 1976). In this seismically active zone it is tempting to interpret geodetic movements as possible earthquake precursors (Wyss, 1976), but clearly the glacial effects upon such measurements must be removed before they can be used to predict earthquakes.
This inverse calculation is an efficient and useful method for reconstructing ice-load fluctuations that are consistent with observed sea-level data. For this example we considered only time-independent responses, but potential perturbation Green functions for a viscoelastic Earth model exist (Peltier, 1974), and we have already solved the forward problem for the time-dependent case (Farrell and Clark, 1976). The inverse calculation for the reconstruction of time-dependent ice loads, including the effect of viscous relaxation, is therefore possible. For data distant from the ice, the correction terms become important and Equation (2) must be used instead of Equation (3). When these complications have been overcome, it will be possible to use existing world-wide sea-level data for the past 20 000 years to reconstruct the late-Wisconsin deglacial history. Such an ambitious inverse calculation requires that we know the viscosity profile of the Earth. This profile is currently in dispute (McConnell, 1968; Walcott, 1973; Peltier, 1974; Smith, 1974; Cathles, 1975), but Peltier (1976) is attempting to resolve the question with an inverse calculation.
An inverse calculation is certainly a useful and efficient means of finding a model that fits the data. Its real strength, however, is that it provides a means of appraising the constructed model on the basis of its accuracy and resolution, so that we know how reliable the model is. With this assurance we need not hesitate to apply inverse calculations to the immense task of reconstructing world-wide glacial melting, because the resulting model can be appraised properly. We also emphasize that the ability of a model to fit the data is only one criterion, and it should not be relied upon alone because, as we have seen, the model that best fits the data is also highly inaccurate.
We fixed the positions and sizes of the model ice points subjectively, and changes here will slightly affect the solution and its interpretation. For this over-determined case the number of ice points is limited by the number of data points, but it is possible to solve the under-determined problem, in which there are more ice points than data points. The under-determined problem is solved by a method similar to that used here, and the extent of ice can then be represented more faithfully. For the reconstruction of the late-Wisconsin deglaciation, the inverse problem must surely be under-determined because of the paucity of complete sea-level curves throughout the world and the huge extent of the late-Wisconsin ice sheets.
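The distinction between the over- and under-determined cases can be sketched with a toy generalized-inverse computation. The design matrix G below, mapping ice-point thinnings to predicted emergences, is invented purely for illustration and is not the matrix of this study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Over-determined: more data (rows) than ice points (columns);
# lstsq returns the least-squares model.
G_over = rng.standard_normal((6, 3))
d_over = rng.standard_normal(6)
m_over, *_ = np.linalg.lstsq(G_over, d_over, rcond=None)

# Under-determined: more ice points than data; the pseudoinverse
# selects the minimum-norm model among the infinitely many that
# fit the data exactly.
G_under = rng.standard_normal((3, 6))
d_under = rng.standard_normal(3)
m_under = np.linalg.pinv(G_under) @ d_under

# Model resolution matrix R = G+ G; R = I would mean every ice point
# is perfectly resolved, which the under-determined case cannot achieve.
R = np.linalg.pinv(G_under) @ G_under
print(np.allclose(R, np.eye(6)))   # False: some ice points are unresolved
```

The trace of R equals the rank of G, so at most three of the six ice points' worth of information can be resolved here, which is the sense in which the late-Wisconsin reconstruction would be appraised.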
Gathering field data on relative sea-level changes is both costly and time-consuming. When planning field operations, we should use inverse calculations to determine the data importance of the proposed study sites and use this estimate to set our priorities in the field.
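One common way to quantify this data importance in the inverse framework used here (following Wiggins, 1972) is the diagonal of the data resolution matrix N = G G+: large diagonal entries mark observations that constrain the model most. A minimal sketch, with an invented design matrix standing in for candidate benchmark sites:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((8, 3))    # 8 candidate sites, 3 ice points (toy numbers)

# Data resolution matrix; its diagonal gives each site's importance.
N = G @ np.linalg.pinv(G)
importance = np.diag(N)

# The importances sum to the rank of G, so they partition a fixed
# information budget among the candidate sites.
print(importance.sum())            # equals 3 here (rank of G)
```

Sites with the largest importances would be the natural places to establish or re-occupy benchmarks first.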
This study only considered glacier thinning from 1910 to 1960, but glacier observations have been much more frequent and more accurate since that time. It would therefore be very useful to re-occupy the benchmarks and to analyze the emergence during the past 15 years with the method outlined above. Such a study would give a better indication of the proportion of emergence that is caused by viscous relaxation of the mantle. Establishing new benchmarks in suitable places will improve the resolution of certain ice points and greatly extend our knowledge of the processes relating glacier mass balance and sea-level fluctuations. Sea-level data from Tarr and Hopkins Inlets will be particularly useful.
Acknowledgements
I am indebted to Martin L. Smith and E. Robert Engdahl for introducing me to inversion theory and for their encouragement and help throughout all aspects of this work. Without their support I would not have completed this project. William E. Farrell and Gifford H. Miller supplied valuable criticism of an earlier version of this work and numerous discussions with Keith Echelmeyer honed my understanding of inversion theory. William E. Farrell and W. Richard Peltier kindly supplied the necessary Green functions and the University of Colorado Computing Center furnished all of the computer time. My support was from NSF Grants DES74-13047-A01, EAR74-13047-A02, and NOAA Contract Number 03-5022-94.