
Calculation of two-dimensional avalanche velocities from optoelectronic sensors

Published online by Cambridge University Press:  14 September 2017

Jim N. McElwaine*
Affiliation:
Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, England E-mail: jnm11@amtp.cam.ac.uk

Abstract

Optoelectronic sensors using infrared light-emitting diodes and photo-transistors have been used for measuring velocities in snow avalanches for more than 10 years in America, Europe and Japan. Though they have been used extensively, how they should be designed and how the data should be processed have received little discussion. This paper discusses how these sensors can be applied to measure two-dimensional velocities. The effects of acceleration and of structure in the underlying field of reflectance are carefully accounted for. An algorithm is proposed for calculating the continuous velocity vector of an avalanche, and a sketch of the mathematical analysis is given. The paper concludes by suggesting design criteria for such sensors.

Type
Research Article
Copyright
Copyright © The Author(s) 2004

1. Introduction

Optoelectronic sensors have long been used to measure the velocities inside granular flows. Some of the earliest work was done by Nishimura and others (1987) on snow avalanches and continued in Nishimura and others (1993). Early work was also done by Dent and others (1998), with measurements taken from the “Revolving Door” avalanche path near Bridger Bowl, MT, U.S.A.

The basic design of these sensors is simple. An infrared light-emitting diode emits light that is reflected by the passing granular material and this is detected by an infrared-sensitive photo-transistor, amplified, digitized and stored on a computer. By comparing the signals from nearby sensors it is possible to calculate the velocity of the flow.
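
As a concrete illustration of this principle, the following sketch shows the standard two-element cross-correlation estimate (not the estimator developed later in this paper); the function name, parameter values and synthetic data are illustrative assumptions.

```python
import numpy as np

def transit_speed(f1, f2, dt, L):
    """Estimate the speed of material moving from element 1 to element 2.

    f1, f2 : sampled signals from two sensor elements (same length)
    dt     : sampling interval in seconds
    L      : separation of the element centres in metres
    Returns L / lag, where lag maximizes the cross-correlation.
    """
    f1 = f1 - f1.mean()
    f2 = f2 - f2.mean()
    corr = np.correlate(f2, f1, mode="full")        # correlation at every lag
    lags = np.arange(-len(f1) + 1, len(f1)) * dt    # lag in seconds for each entry
    best = lags[np.argmax(corr)]
    if best <= 0:
        raise ValueError("no positive lag found; wrong flow direction or weak signal")
    return L / best

# Synthetic example: element 2 sees the same signal 5 samples after element 1.
rng = np.random.default_rng(0)
s = rng.standard_normal(2000)
f1, f2 = s[5:], s[:-5]
print(transit_speed(f1, f2, dt=1e-4, L=0.02))       # ~0.02 m / 5e-4 s = 40 m/s
```

The lag at which the correlation peaks is the transit time between the two elements, and the speed follows from the known separation.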

In theory, it is possible to calculate many other pieces of information about the flow, since the magnitude of the backscattered light depends on the density, type, size and orientation of the snow crystals. However, though Dent and others (1998) tried to relate reflectivity to snow density, they failed because crystal size and type are much more important than density. Some gross aspects of the flow can be determined, however. For example, in deposited snow the signal will be constant; in a powder cloud the signal will be very low, since no light from above can reach the sensor and the density is usually too low to reflect light significantly; and above the snow a high level will be detected because of ambient lighting.

Despite the widespread use of optoelectronic sensors, there appears to have been little work done on how these sensors can be applied to two-dimensional measurements. In McElwaine and Tiefenbacher (2003) a detailed analysis is developed for two-element sensors, and the standard cross-correlation algorithm is analyzed in detail. The main results of this work are that the measured velocities are consistently too high by

(1)

where υx is the velocity parallel to the sensor, υy the perpendicular velocity, ax and ay the corresponding accelerations, and T the width of the correlation window. The paper also shows that T must be sufficiently large so that Lc/(Tυ) is small, where Lc is the correlation length of the snow, in order that the minimum of the cross-correlation can be located, and that Lcυy/(Lυ), where L is the sensor element separation, must be small so that there are correlations between the sensor elements. These requirements, that the bias is low and that the correlation exists and can be found, cannot be simultaneously satisfied unless υy/υx, Tax/υx and Tay/υx are all small. A novel analysis method is briefly presented that changes the time-scale responsible for acceleration errors from T, the width of the correlation window, to L/υx, the transit time over the sensor. This dramatically reduces the acceleration-induced bias, but the bias from υy cannot be eliminated satisfactorily with a one-dimensional sensor. This paper begins by discussing how a four-element sensor will perform with different flow fields. This insight is then used to develop a continuous estimation procedure for the two-dimensional velocity, which is introduced and analyzed.

For convenience, in this paper we ignore discretization errors in time and regard functions as continuous. This simplification can be made as long as the signals are properly filtered before digitization so that there are no frequencies higher than the Nyquist frequency (half the sample frequency). Continuous functions are then defined by their Fourier interpolation.
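
As an illustration of this convention, band-limited (Fourier) interpolation can be carried out by zero-padding the spectrum; the same trick gives the super-resolution lag estimates mentioned in section 4. This is a minimal sketch with illustrative names, and it assumes the signal has already been low-pass filtered as described above.

```python
import numpy as np

def fourier_interpolate(f, factor):
    """Band-limited (Fourier) interpolation of a sampled signal.

    Zero-padding the spectrum resamples the signal on a grid `factor` times
    finer without introducing frequencies above the original Nyquist
    frequency.  Assumes negligible energy exactly at the Nyquist bin.
    """
    n = len(f)
    F = np.fft.rfft(f)
    F_padded = np.zeros(factor * n // 2 + 1, dtype=complex)
    F_padded[: len(F)] = F                       # keep original band, pad the rest
    return np.fft.irfft(F_padded, n=factor * n) * factor

# Example: a band-limited signal sampled at 10 points, interpolated 8x.
t = np.arange(10)
f = np.cos(2 * np.pi * t / 10)
fine = fourier_interpolate(f, 8)
print(len(fine), fine[:4])                       # 80 samples, starting near 1.0
```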

2. Interpretation of Lag Times

2.1. Effect of acceleration

One of the simplest situations is shown in Figure 1. An edge is moving past four photo-transistors with centres at xi. Only the velocity and acceleration normal to the edge, that is in the direction n̂, can be resolved. This is known as the aperture effect (Jähne, 1997, ch. 13). We write υ = n̂ · v and similarly for the acceleration a = n̂ · a. If a is constant, then the arrival time of the edge at each sensor, τi, will be given by

Fig. 1. Schematic of a four-element sensor with a straight edge.

(2)
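
Equation (2) is not reproduced above. A plausible form, consistent with the coordinate definitions in the next paragraph and with the projected edge position y + υt + ½at² used after Equations (3), would be the following; this is an inferred sketch, not quoted from the original.

```latex
\hat{\mathbf{n}}\cdot\mathbf{x}_i \;=\; y + \upsilon\,\tau_i + \tfrac{1}{2}\,a\,\tau_i^{2},
\qquad i = 1,\dots,4 .
```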

There are four equations, one for each sensor, and four unknowns: υ the normal velocity, a the normal acceleration, y the position of the edge at t = 0 and n̂ the normal direction to the edge. A convenient choice of coordinates is n̂ · x1 = (L/2) cos θ, n̂ · x2 = −(L/2) cos θ, n̂ · x3 = (L/2) sin θ and n̂ · x4 = −(L/2) sin θ, so that the diagonal length between the sensors is L. Equation (2) for each i can now be solved to give

(3)

where the following auxiliary variables have been defined

(4)

Using y, υ and a from Equations (3) we know that at time t the position of the edge projected in the direction n̂ is y + υt + ½at² and the velocity is υ + at. A natural choice is to calculate the time t0 when the edge is over the centre of the sensor and then calculate the velocity at this time. This gives a quadratic equation for t0 which has the following solution:

(5)

This has been written so that the correct solution is chosen regardless of the sign of cos(2ϕ). The velocity at this time is

(6)

The acceleration a and angle θ, given by Equations (3), are both independent of time in this model. Thus these two equations along with Equation (6) can give an estimate of the flow properties at the time t0 given by Equation (5).
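
Since the closed forms in Equations (3–6) are not reproduced above, a numerical sketch of the same calculation may help: fit the four arrival times to the constant-acceleration edge model by non-linear least squares and evaluate the speed at the centre-crossing time t0. The function name, initial guess and synthetic numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_edge(tau, L):
    """Fit (y, v, a, theta) of an accelerating straight edge to four arrival times.

    Model (cf. the text): n.x_i = y + v*tau_i + 0.5*a*tau_i**2, with element
    projections n.x = (L/2)cos(theta), -(L/2)cos(theta), (L/2)sin(theta), -(L/2)sin(theta).
    """
    def residual(p):
        y, v, a, theta = p
        proj = 0.5 * L * np.array([np.cos(theta), -np.cos(theta),
                                   np.sin(theta), -np.sin(theta)])
        return y + v * tau + 0.5 * a * tau**2 - proj

    y, v, a, theta = least_squares(residual, x0=[0.0, 1.0, 0.0, 0.8]).x

    # Time t0 at which the edge crosses the sensor centre (projected position zero),
    # and the normal speed v + a*t0 at that time.
    roots = np.roots([0.5 * a, v, y])
    roots = roots[np.isreal(roots)].real
    t0 = roots[np.argmin(np.abs(roots - tau.mean()))]
    return v + a * t0, a, theta

# Synthetic check: L = 0.02 m, true v = 3 m/s, a = 5 m/s^2, theta = 30 deg, y = 0.
L, v, a, th = 0.02, 3.0, 5.0, np.radians(30)
proj = 0.5 * L * np.array([np.cos(th), -np.cos(th), np.sin(th), -np.sin(th)])
tau = (-v + np.sqrt(v**2 + 2 * a * proj)) / a      # arrival times from the same model
print(fit_edge(tau, L))                            # expect roughly (3.0, 5.0, 0.52)
```

As discussed below, this acceleration-model fit degenerates when cos 2θ is small, i.e. when the edge moves parallel to the sides of the square.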

There are several salient features of these equations. First, they are exact for all accelerations, angles and velocities. The mean time τ̄ only occurs in Equation (5), specifying the time at the centre of the measurement. The angle, velocity and acceleration depend only on the differences between the lag times τi. To understand these equations it is helpful to invert them.

(7)

where υ is the velocity at the centre point and ϵ = aL/υ² is the relative velocity change over the size of the sensor. For ϵ > 1 the velocity can change direction so that the edge will not reach the sensor for certain angles, and this is shown in the above equations by the square roots becoming imaginary. By considering small relative accelerations the nature of λ as a measure of relative acceleration and δ1 and δ2 as velocities along the coordinate axes is clear. The above expansions in ϵ are uniformly valid for but the inverse expansions treating λ as a small parameter break down when cos 2θ is small. This is because when the edge is moving parallel to the sides of the square it is no longer possible to calculate the acceleration. This problem will be expanded upon when we discuss the errors.

To calculate the effect of errors in the four τi we assume that they are random variables with mean μi and independent errors with variance σ². These errors arise from quantization, statistical fluctuations, electronic noise and deviations of the reflectance field from our model. Allowing the errors to be dependent only affects the results by a small factor and is not important.

Consider an estimate of some property ĝ = g(τi). The mean squared error about the exact value g(μi) can then be calculated as follows.

(8)
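
Equation (8) presumably expresses the standard first-order ("delta method") propagation of independent errors; under the assumptions just stated it would read as follows (an inferred sketch, not quoted from the original).

```latex
\mathrm{E}\!\left[\bigl(\hat{g}-g(\mu_i)\bigr)^{2}\right]
  \;\approx\; \sigma^{2}\sum_{i=1}^{4}
  \left(\frac{\partial g}{\partial \tau_i}\right)^{\!2}_{\tau_i=\mu_i} .
```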

Using this formula for each of υ̂, â and θ̂, the errors can easily be calculated. The expressions are rather long, however, so we only consider the case of small ϵ and expand.

(9)

These equations show that there is a very strong dependence of the error on the angle for both the acceleration and the velocity. For small cos(2θ) the expansions break down and the errors in calculating υ and a can be arbitrarily large. This breakdown occurs for any size of ϵ, though the equations are not included here. Different estimators that avoid this breakdown will be discussed in the following subsections. These results also show that the size of the errors is determined by the dimensionless grouping συ/L. This is exactly the uncertainty one would expect if the sampling period is σ and the lag times τi can be located to this accuracy. If this were the case, then the larger L is, the smaller the errors would be. In general, however, this is wrong for two reasons. If a feature or edge is diffuse, the lags can be calculated with a precision much finer than the sampling period. Also, as L increases, the correlation between the sensors will decrease and may disappear, so that σ increases without bound. A different approach is needed to include these effects, which will be discussed later. First, however, two different modelling assumptions will be introduced.

2.2. Effect of curvature

The previous subsection considered the case of a straight edge, with no curvature over the scale of the sensor, but allowed the velocity to vary. In this subsection we consider the case of constant velocity, but with curvature. This would correspond to the case where the size of the sensor is comparable with that of the particles. A four-element sensor does not give enough information to resolve the particle centre, curvature and velocity, as this involves five unknowns, but it is possible to solve for the normal velocity. The four equations for the lags at each sensor are

(10)

where y is the centre of curvature at t = 0 and R is the radius of curvature. Using the normal approximation, so that y = yn̂ and v = υn̂, and the same definition of the sensor element positions xi, the equation for each lag time is

(11)

where m̂ is the unit vector normal to n̂. For a straight edge R is infinite, so it is more convenient to work with the curvature κ = 1/R.

The four equations can be solved to give

(12)

(13)

(14)

(15)

These are considerably simpler than those involving acceleration because the curvature gives a linear time shift and thus has no effect on the velocity or normal angle. The effect of acceleration is non-linear and more complicated. In the case of small acceleration, the results give the same estimates for υ and θ. Since this model includes no acceleration, the velocity and angle estimates hold for all times. A sensible choice is to regard them as applying at the mean time τ̄.

If we use Equation (12) as our estimate υ̂ and Equation (13) as our estimate θ̂, then the errors in the estimates can be calculated as before. They are

(16)

These equations are exact and have no dependence on angle or curvature. Thus the velocity and normal angle can be equally well calculated in any direction. The estimates of the errors in κ and y do depend on the angle and are also unstable for small cos 2θ, again showing that the sensor will be most effective when aligned with the diagonal in the flow direction.

Suppose now we use the estimates given by Equations (12) and (13) in the case of acceleration. The total squared error will now consist of a bias term and an uncertainty in τ term. The bias errors are

The variance of these estimates can also be calculated as before, but is now much simpler than for the unbiased estimates including acceleration.

(17)

(18)

(19)

(20)

These mean squared errors do depend on the angle θ, but they are always finite. So, provided that the velocity does not vary too much over the size of the sensor (that is, ϵ is small), the estimators derived from the curvature model in Equations (12) and (13) will be more accurate. Only if the flow is very closely aligned with the sensor, the accelerations are very large and the curvature can be neglected will the estimators based on the acceleration model prove better.

2.3. Closest approach

In this subsection we consider another interpretation of the lag times when there is structure in the flow field over length scales smaller than the sensor size. In the next section when the complete statistical description is given, this is also seen to arise from averaging over many edge events.

We consider a trajectory given by (τ − t)v + ½(τ − t)²a, so that it passes through the centre of the sensor at time t. The lag time τi for sensor i is the time of closest approach, so that

(21)

Each of these equations is a cubic in τi which can be solved exactly, but for simplicity we only consider series solutions for small acceleration a. In this case we interpret θ as the direction of the velocity v and let ω be the angle between v and a, so that a · v = aυ cos ω. Expanding in terms of the non-dimensional ϵ and substituting the τi into Equations (4),

(22)

(23)

(24)

(25)

Equation (24) shows that the mean time is only affected to lowest order by acceleration in the direction of the velocity, as would be expected. Substituting these results into the simple estimators υ̂ = L/γ and θ̂ = ϕ, we can calculate the bias and variance as before.

(26)

(27)

What these results show is that under the assumptions of this section the simple estimators for υ (Equation (12)) and θ (Equation (13)) also perform well. That is, the bias and the variance will be small for all angles provided that the velocity does not change too much across the width of the sensor and the lag times can be accurately calculated. In the next section we describe how to calculate the lag times and perform a complete statistical analysis of the approach.

3. Continuous Time Estimation

The continuous time estimation approach was described briefly in McElwaine and Tiefenbacher (2003) for two-element sensors. In this paper we show how the same method can be applied to four-element sensors. The field of reflectance f(x) is regarded as constant in time and corresponds to the value one element of our sensor would output if placed at a location x. For simplicity we do not include temporal variation in f, but this is not significant, since its effects are very similar to those of changes in velocity. Let fi(t) be the signal received by sensor i at time t, which is f(xi − y(t)), where y(t) is the relative trajectory of the centre of the sensor to the flowing medium. The method consists of minimizing the Lagrangian

(28)

The variables to solve for are τi, the lags at each sensor, and f(t), the estimated underlying value at the centre of the sensor. The first terms of L penalize the discrepancy between the lagged sensor signals and the estimated signal f. If there were no constraint on the lag functions then these terms could be made zero, but the lags would be unphysical. For this to be well posed, the τi(t) must certainly have no frequencies higher than the sampling frequency. In fact we expect only slow changes in the velocity, so a simple term that enforces this has been chosen, which has nice features: it is analytically tractable and corresponds to linear interpolation when there is little information present in the signal. is a parameter that determines the strength of this smoothing. At least one additional constraint on the τi(t) is also necessary. Using our results from the previous section, which show that estimators based only on δ1 = δ cos ϕ and δ2 = δ sin ϕ give good results, we use τ1(t) = t + δ1/2, τ2(t) = t − δ1/2, τ3(t) = t + δ2/2 and τ4(t) = t − δ2/2. It would also be possible to try to calculate curvature, but imposing an additional constraint simplifies the numerical implementation and means that the residuals fi(τi(t)) − f(t) contain much useful information about the accuracy of the method. The Lagrangian (Equation (28)) then becomes

(29)

where the dependence of δi on time has been dropped. Solving this variational problem is straightforward. The estimated field value f is simply the average of the lagged measurements

(30)

If there is additional knowledge about the underlying field f this could be incorporated here, but failing that the mean field can be eliminated. The Euler–Lagrange equations for δi are then

(31)

(32)

where each fi is evaluated at the appropriate lagged time τi(t). This can be solved efficiently numerically using an implicit multi-grid scheme.
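
A minimal numerical sketch of this variational approach is given below, assuming a discretized time grid, a simple quadratic penalty λ Σ (dδ/dt)² in place of the (unspecified) smoothing term, and plain quasi-Newton descent rather than the implicit multi-grid scheme; all names, the penalty form and the synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_lags(f, t, lam=1e-2):
    """Estimate the lag functions delta1(t), delta2(t) for a four-element sensor.

    f   : array of shape (4, n), the signals from elements 1..4 sampled at times t
    lam : strength of a simple quadratic penalty on d(delta)/dt

    Minimizes  sum_t sum_i [f_i(tau_i(t)) - fbar(t)]^2 + lam * sum_t (d delta/dt)^2
    with tau1 = t + d1/2, tau2 = t - d1/2, tau3 = t + d2/2, tau4 = t - d2/2 and
    fbar the mean of the lagged signals (Equation (30)).
    """
    n, dt = len(t), t[1] - t[0]

    def cost(p):
        d1, d2 = p[:n], p[n:]
        taus = (t + d1 / 2, t - d1 / 2, t + d2 / 2, t - d2 / 2)
        lagged = np.array([np.interp(tau, t, fi) for tau, fi in zip(taus, f)])
        fbar = lagged.mean(axis=0)
        misfit = ((lagged - fbar) ** 2).sum()
        smooth = lam * (((np.diff(d1) / dt) ** 2).sum() + ((np.diff(d2) / dt) ** 2).sum())
        return misfit + smooth

    p = minimize(cost, np.zeros(2 * n), method="L-BFGS-B", options={"maxiter": 500}).x
    return p[:n], p[n:]

# Synthetic test: a frozen, smooth random field advected past a sensor of diagonal
# L = 0.02 m at v = 5 m/s, so the true lags are d1 = L/v = 4 ms and d2 = 0.
rng = np.random.default_rng(1)
tf = np.linspace(-0.05, 0.25, 2000)
field = np.convolve(rng.standard_normal(2000), np.ones(80) / 80, mode="same")
t = np.linspace(0.0, 0.2, 200)
L, v = 0.02, 5.0
shifts = (-L / (2 * v), +L / (2 * v), 0.0, 0.0)   # elements 1,2 along the flow; 3,4 across
f = np.array([np.interp(t + s, tf, field) for s in shifts])
d1, d2 = estimate_lags(f, t)
print(d1[20:-20].mean(), d2[20:-20].mean())       # should be close to 0.004 and 0.0
```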

A rigorous analysis of these equations will not be given here, but an indication of the nature of the solutions and the errors involved will be briefly discussed. The reading at each sensor is fi(t) = f(xi - y(t)). Taylor expanding each fi in Equation (31) to lowest order about -y(t) gives

(33)

(34)

where x1 − x2 = Lx̂ and x3 − x4 = Lŷ. The equations for δi are now decoupled. Letting g be the signal power, the equation for δ1 can then be written

(35)

The right-hand side of this vanishes when δ1 = L(x̂ · n̂)/(v · n̂), which is precisely the condition (neglecting acceleration) in Equation (7), that is, the lag time for the arrival of an edge with normal direction n̂ travelling with velocity v. Departures away from this value are governed by the size of g(v · n̂)²/2. When this is large, δ1 will closely match the lag. When it is smaller, larger deviations will be allowed and δ1 will be an average of this and be approximately linear. This equation can be analyzed by splitting the multiplier into its mean part and its varying part. If f is isotropic, then g and n̂ are independent, and n̂ is distributed uniformly on the unit circle. So defining α, which depends only weakly on t through changes in υ, Equation (35) becomes

(36)

The Green’s function for the left-hand side (the solution that decays at large times) can be written down explicitly, neglecting variation in α. So we can write

(37)

This equation can be used to calculate the statistics of δ1(t). It is most easily understood in the frequency domain. The Fourier transform of the Green’s function is α²/(α² + ω²); that is, it is a low-pass filter with cut-off frequency α. If the dominant frequencies in n̂(t) are above this, then they can be averaged to give

(38)

The approximation for δ2 is similar, so that up to terms in acceleration we have the same results as in Equations (22) and (23); thus the velocity can be estimated as

(39)

The above heuristic justification can be made precise and the errors quantified, but the analysis is complicated and the brief discussion above contains the salient features.
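
To illustrate the filtering interpretation, the averaging implied by the Green’s function can be applied directly in the frequency domain. This is a sketch with α treated as constant and all names illustrative.

```python
import numpy as np

def smooth_lag(delta_raw, dt, alpha):
    """Apply the low-pass filter alpha^2 / (alpha^2 + omega^2) to a raw lag
    estimate, i.e. the averaging implied by the Green's function, with
    alpha (in rad/s) treated as constant in time."""
    omega = 2 * np.pi * np.fft.rfftfreq(len(delta_raw), dt)
    H = alpha**2 / (alpha**2 + omega**2)
    return np.fft.irfft(H * np.fft.rfft(delta_raw), n=len(delta_raw))

# Noisy per-edge lag estimates fluctuating about a true lag of 4 ms.
rng = np.random.default_rng(2)
dt = 1e-4
raw = 0.004 + 0.002 * rng.standard_normal(5000)
s = smooth_lag(raw, dt, alpha=50.0)       # cut-off about 8 Hz
print(s.mean(), s.std())                  # mean preserved, fluctuations suppressed
```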

Though the analysis and implementation for the continuous method described in Equation (28) are more complicated than for the traditional approach of calculating lags at fixed times using a fixed window, there are several important advantages of this approach.

1. The primary errors due to acceleration enter through the time-scale defined by the transit time over the sensor, L/υ, not the window width needed to calculate the cross-correlation. The acceleration errors will therefore be an order of magnitude smaller. The smoothing needed to overcome the aperture effect acts as a low-pass filter, removing high frequencies in the velocity rather than introducing bias. If there is prior information about the wavelengths in f(x), for example if the particle size distribution is known, the regularization term (equivalent to ω² in the frequency domain) can be changed so as to produce any desired filter.

2. The effective smoothing width 1/α depends inversely on the signal strength, so that velocity is automatically interpolated where the signal is weak and errors are reduced where the signal is strong.

3. The cross-correlation approach can fail by finding an incorrect global maximum that is unrelated to the lag time (McElwaine and Tiefenbacher, 2003), resulting in an effectively infinite expected mean squared error. The errors in this continuous approach will only increase gradually as the signal degrades, and the mean squared error will always be finite until there is a total lack of correlation.

4. Design Criteria for Sensors

The design of a sensor to calculate velocities depends on several factors. In order to calculate velocities it is necessary to average over several incident edges at different angles, but as the averaging time increases, higher frequencies in the velocity are filtered out. Thus maximum sensitivity will be achieved by considering the smallest wavelengths, Lc, that exist in the reflectance field f(x). This will be similar to the size of the smallest particles. For the sensor elements to detect these wavelengths, the diameter of the elements should be the same or smaller. The area of the element acts as a low-pass spatial filter, so in order to satisfy the Nyquist criterion and prevent aliasing, the sensor spacing should be the same. That is, there should be no gaps between the active parts of the elements that are larger than the smallest wavelength. For flowing snow the smallest particles may be <1 mm in size and it may not be possible to build such a small sensor, though this might be achievable using fibre-optic cables to connect to the photo-transistors.

The sampling frequency should also be chosen to satisfy the Nyquist requirement. The highest frequencies that will be observed are ≈ υmax/(d + Lc), where d is the diameter of the active part of a sensor element. (If the sensor elements are not round, this should be the smallest dimension through the element centre.) The sampling frequency should be chosen to be at least twice this, but going much higher will add no further information. The lag times can easily be calculated with super-resolution (accuracy greater than the sampling frequency) by interpolating (oversampling) the signal. Finally, if the flow has a preferred direction, such as parallel to the ground, it is best to orient the sensor as shown in Figure 2, which is optimized for horizontal or vertical directions. This has several advantages. Most of the errors are minimized in this direction. The acceleration or curvature can be calculated as the same part of the flow passes over sensors at three different times. The spatial aliasing also depends on the angle, as it is less likely that particles will pass through the gaps. Another counter-measure, if spatial aliasing cannot be prevented, is to reduce the vertical dimension Ly as shown in Figure 3. This will reduce most errors when the flow is close to horizontal but increase errors when the velocity is closer to vertical.

Fig. 2. Schematic of a symmetric four-element sensor, where L = Lx = Ly and the elements have diameter d.

Fig. 3. Schematic of an asymmetric four-element sensor, optimized for horizontal flow.

These design criteria can easily be summarized in terms of the power spectrum (the energy at different wavelengths) of the reflectance field f. Energy at wavelengths less than the averaging diameter d of each photo-transistor is filtered out. Energy at wavelengths between d and the sensor spacing L appears as noise. Energy at wavelengths greater than the spacing L is the useful signal energy. Since there are also fixed sources of error, the maximum signal-to-noise ratio is obtained by having d = L = Lc/2. The sampling frequency needs to be at least 2υ/Lc. For laboratory chute experiments with snow, Lc ≈ 1 mm and υ < 10 m s−1, so the sampling frequency should be around 20 kHz and the sensors 1 mm in size. For a real snow avalanche the sampling frequency would need to be higher, around 50 kHz. Photo-transistors are generally larger than 1 mm, but by using fibre-optic cables, or a charge-coupled device (CCD) chip rather than photo-transistors, it should be possible to build such a sensor. If existing sensors are to be used, the best choice would be to mount four photo-transistors as close together as possible, perhaps with a defocusing lens in order to average spatially over the sensor spacing L.
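
A small worked example of these numbers follows the rules of thumb above; the function name and return format are illustrative.

```python
def sensor_design(L_c, v_max, d=None):
    """Rule-of-thumb design numbers from the criteria in this section.

    L_c   : smallest wavelength in the reflectance field (m), ~ smallest particle size
    v_max : maximum expected flow speed (m/s)
    d     : active element diameter (m); the text suggests d = L = L_c / 2
    """
    if d is None:
        d = L_c / 2                       # d = L = L_c/2 maximizes signal-to-noise
    f_element = v_max / (d + L_c)         # highest frequency seen by one element
    f_sample = 2 * v_max / L_c            # simple Nyquist rule used in the text
    return {"element size (m)": d,
            "highest signal frequency (Hz)": f_element,
            "minimum sampling frequency (Hz)": max(f_sample, 2 * f_element)}

# Laboratory chute: L_c ~ 1 mm, v < 10 m/s  ->  ~0.5 mm elements, ~20 kHz sampling.
print(sensor_design(1e-3, 10.0))
```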

5. Conclusions

There are many possible sources of error in calculating velocities from optoelectronic sensors. This paper has described how the careful design of such instruments and analysis procedures can reduce and quantify the errors. Further improvements could be made by investigating in detail the nature of the reflectance field f(x) and constructing a parametric model that incorporates information such as the particle size distribution. In a flow of identical granular particles, this would be straightforward and estimates of the local flow density could easily be made, by reducing the smoothing and calculating individual normal velocities and angles, rather than attempting to find the velocity averaged over several particles and orientations. Using a sensor with five elements would allow the curvature and size of the particles also to be reliably estimated, which would be important for a more complicated material such as snow when the sizes are not known.

Acknowledgements

The author was supported by a University Research Fellowship from the Royal Society. Part of this work was undertaken at the Federal Institute for Avalanche Research in Davos, supported by a grant from the Swiss National Science Foundation. The author would like to thank F. Tiefenbacher for making data available from correlation sensors and B. Turnbull and P. Bartelt for arranging the visit.

References

Dent, J. D., Burrell, K. J., Schmidt, D. S., Louge, M. Y., Adams, E. E. and Jazbutis, T. G. 1998. Density, velocity and friction measurements in a dry-snow avalanche. Ann. Glaciol., 26, 247–252.
Jähne, B. 1997. Digital image processing: concepts, algorithms, and scientific applications. Fourth edition. Berlin, Springer-Verlag.
McElwaine, J. N. and Tiefenbacher, F. 2003. Calculating internal avalanche velocities from correlation with error analysis. Surv. Geophys., 24(5–6), 499–524.
Nishimura, K., Maeno, N. and Kawada, K. 1987. [Internal structures of large-scale avalanches revealed by a frequency analysis of impact forces]. Low Temp. Sci., Ser. A, 46, 91–98. [In Japanese with English summary.]
Nishimura, K., Maeno, N., Sandersen, F., Kristensen, K., Norem, H. and Lied, K. 1993. Observations of the dynamic structure of snow avalanches. Ann. Glaciol., 18, 313–316.