
Optimal Fault Detection and Exclusion Applied in GNSS Positioning

Published online by Cambridge University Press:  17 May 2013

Ling Yang*
Affiliation:
(School of Surveying and Geospatial Engineering, The University of New South Wales, Australia)
Nathan L. Knight
Affiliation:
(School of Surveying and Geospatial Engineering, The University of New South Wales, Australia)
Yong Li
Affiliation:
(School of Surveying and Geospatial Engineering, The University of New South Wales, Australia)
Chris Rizos
Affiliation:
(School of Surveying and Geospatial Engineering, The University of New South Wales, Australia)

Abstract

In Global Navigation Satellite System (GNSS) positioning, it is standard practice to apply the Fault Detection and Exclusion (FDE) procedure iteratively, in order to exclude all faulty measurements and then ensure reliable positioning results. Since it is often only necessary to consider a single fault in a Receiver Autonomous Integrity Monitoring (RAIM) procedure, it would be ideal if a fault could be correctly identified. Thus, fault detection does not need to be applied in an iterative sense. One way of evaluating whether fault detection needs to be reapplied is to determine the probability of a wrong exclusion. To date, however, limited progress has been made in evaluating such probabilities. In this paper the relationships between different parameters are analysed in terms of the probability of correct and incorrect identification. Using this knowledge, a practical strategy for incorporating the probability of a wrong exclusion into the FDE procedure is developed. The theoretical findings are then demonstrated using a GPS single point positioning example.

Type
Research Article
Copyright
Copyright © The Royal Institute of Navigation 2013 

1. INTRODUCTION

When estimating position using the least-squares estimation technique, it is expected that the calculated position conforms to a normal distribution centred at the true position. The existence of a faulty pseudorange measurement causes the estimated position to become biased. For this reason it is vital that fault detection be applied to detect the presence of a faulty pseudorange. In circumstances where GNSS is used as a primary means of navigation, however, detection alone is not sufficient. Upon detection of a fault, measurements from the “bad” satellite should be excluded before navigation can continue.

Using an outlier test for fault detection amounts to making a decision between the null and alternative hypotheses (Baarda, 1967; 1968; Kelly, 1998; Koch, 1999). Usually, the pseudorange corresponding to the largest outlier statistic is judged to be faulty and is subsequently excluded (Kelly, 1998). In such a procedure the probability of drawing wrong conclusions cannot be avoided. These are referred to as Type I and Type II errors, denoted $\alpha_0$ and $\beta_0$ respectively. The Type I and Type II error values in fault detection are set based on the probability of a false alert and the probability of a missed detection.

However, the outlier statistics are prone to masking and swamping, and thus the wrong pseudorange can be identified (Parkinson and Axelrad, 1988; Lee et al., 1996; Hekimoglu, 1997; Lee and Van Dyke, 2002). Masking means that a pseudorange contaminated by a fault is identified as a good one. Conversely, swamping is when a good pseudorange is identified as faulty (Hekimoglu, 1997). This probability of identifying the wrong pseudorange is the probability of a wrong exclusion (Lee et al., 1996). In statistics this probability is referred to as a Type III error, where the null hypothesis is correctly rejected but the wrong pseudorange is identified as being faulty (Hawkins, 1980). If the probability of a wrong exclusion can be evaluated, then there is a possibility that the position can be classified as available for navigation without even having to reapply fault detection (Lee et al., 1996). In the case where the probability of a wrong exclusion is too high, fault detection would still have to be reapplied after exclusion, or the position would be classified as unavailable. Nevertheless, significant operational benefit could still be gained from an algorithm that evaluates the probability of a wrong exclusion, such that the confidence level of fault detection can be assured (Lee et al., 1996).

It is for this reason that Lee (1995) and Kelly (1998) attempted to evaluate the probability of a wrong exclusion by taking the difference between two outlier statistics. This is because two outlier statistics that are separated by a small distance have a higher probability of contributing to a wrong exclusion. Conversely, as the difference between two outlier statistics grows, there is less probability of making a wrong exclusion. The issue with using the difference between the outlier statistics, however, is that it does not precisely estimate the probability of a wrong exclusion (Ober, 2003). Another method of estimating the probability of a wrong exclusion is given by Pervan et al. (1996; 1998). In this method, it is assumed that the fault conforms to a uniform distribution. Then, using Bayesian statistics, the probability of a wrong exclusion is evaluated. The weakness of this method, though, is that the true distribution of the biases is unknown. Consequently, even small changes within the assumed distribution of the faulty pseudorange can have a significant influence on the estimated probability of a wrong exclusion. Outside the field of navigation, Förstner (1983) and Li (1986) have carried out studies on the separability of two outlier statistics. Using the results of these studies, Li (1986) defined the Minimal Separable Bias (MSB) as the smallest bias that can be confidently identified for a set Type III error. Applying the MSB to the field of navigation, the separability of various satellite constellations has been analysed by Hewitson et al. (2004), and Hewitson and Wang (2006).

While the basic Fault Detection and Exclusion (FDE) techniques are well established, the relation between FDE algorithm performance and the integrity requirements for a primary means of navigation is not. Specifically, although formulae for the probability of a wrong exclusion and the probability of a missed detection have been developed, there is not yet a practical method for evaluating them in applications. Correctly calculating these probabilities is essential for meeting the integrity requirements of a primary means of navigation. This paper proposes and analyses new methods of correctly calculating these two quantities. Initially, the separability of two alternative hypotheses is analysed. The relationships between the probabilities of false alert, missed detection and wrong exclusion, and the threshold, the correlation coefficient and the non-centrality parameter, are discussed in detail. It is then assumed that each outlier statistic corresponds to a fault with an associated non-centrality parameter. Since the non-centrality parameter is a function of the Type I, II and III errors and the correlation coefficient, the probabilities of missed detection, successful identification and wrong exclusion can be estimated from the non-centrality parameter and the correlation coefficient. Thus, for each outlier statistic, the probabilities of making the different types of errors are estimated, to aid in deciding whether or not the faulty pseudorange can be correctly identified.

This paper is organised as follows. First, the outlier detection theory and the models used in hypothesis testing are introduced. Then their applications to FDE are examined. Thereafter, the separability of two outlier statistics is analysed, and extended to the application of FDE. Next an example is given, using real GPS data, to demonstrate the proposed method. Finally, the conclusions drawn from the study are presented.

2. MODEL DESCRIPTIONS

The linearized Gauss-Markov model applied in navigation is defined by (Schaffrin, 1997; Koch, 1999):

(1)$${l} = {A\hat x} + {v}$$

where:

  • v is the n by 1 vector of residuals,

  • A is the n by t design matrix reflecting the geometric strength,

  • l is the n by 1 measurement vector containing the differences between the pseudorange observations and the computed satellite-to-receiver distances,

  • x is the vector of t unknowns, whose estimated value is ${\hat x}$.

The mean of l and its positive definite covariance matrix are given by:

(2)$$E({l}) = {Ax},\;D({l}) = {\Sigma} = \sigma _0^2 {Q} = \sigma _0^2 {P}^{ - 1} $$

The least-squares solution ${\hat x}$ is optimal in the sense that it is unbiased and that it is of minimal variance in the class of linear unbiased estimators. However, these optimality properties only hold true when Equations (1) and (2) are correct.

2.1. Local Test for Single Alternative Hypothesis

In the case where there are faulty pseudorange measurements: E(l)≠Ax. Consequently, the least-squares estimator of the position becomes biased: $E({\hat x}) \ne {x}$. In order to detect a biased position a fault detection procedure is applied. When a biased position is detected, it can then be corrected by excluding the faulty pseudorange. If it is assumed that the i th pseudorange is faulty, then the correct model is given by:

(3)$${l} = {A\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}} {x}} + {c}_i \nabla _i + {\tilde v}$$

where $\nabla _i $ is the fault in the i th pseudorange, and ci=[0, …, 0, 1, 0, …, 0]T is a unit vector with the i th element equal to one. Solving for the fault then leads to:

(4)$$\hat \nabla _i = ({c}_i^{\rm T} {PQ}_v {Pc}_i )^{ - 1} {c}_i^{\rm T} {PQ}_v {Pl\;} $$

which has the variance:

(5)$$\sigma _{\hat \nabla _i} ^2 = \sigma _0^2 Q_{\hat \nabla _i} = \sigma _0^2 ({c}_i^{\rm T} {PQ}_v {Pc}_i )^{ - 1} $$

where ${Q}_v = {Q} - {A}({A}^{\rm T} {PA})^{ - 1} {A}^{\rm T}$ is the co-factor matrix of the estimated residuals (from the original Gauss-Markov model).

The outlier test statistic for the i th pseudorange can then be formed as (Baarda, 1968; Kok, 1984):

(6)$$w_i = \displaystyle{{\hat \nabla _i} \over {\sigma _0 \sqrt {Q_{\hat \nabla _i}}}} = \displaystyle{{{c}_i^T {PQ}_v {Pl}} \over {\sigma _0 \sqrt {{c}_i^T {PQ}_v {Pc}_i}}} $$

The correlation coefficient between a pair of outlier statistics is given by:

(7)$$\rho _{ij} = \displaystyle{{{c}_i^{\rm T} {PQ}_v {Pc}_j} \over {\sqrt {{c}_i^{\rm T} {PQ}_v {Pc}_i} \sqrt {{c}_j^{\rm T} {PQ}_v {Pc}_j}}}. $$
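Equations (4) to (7) can be evaluated directly from the design and weight matrices. The following sketch (assuming numpy; variable names are illustrative, not from the paper) computes all n outlier statistics and their pairwise correlation coefficients in one pass:

```python
import numpy as np

def outlier_stats(A, P, l, sigma0=1.0):
    """Compute all n outlier statistics w_i (Eq 6) and the correlation
    matrix rho_ij (Eq 7) from the design matrix A (n x t), the weight
    matrix P (n x n) and the misclosure vector l (n,)."""
    Q = np.linalg.inv(P)
    Qv = Q - A @ np.linalg.inv(A.T @ P @ A) @ A.T   # co-factor of residuals
    M = P @ Qv @ P
    d = np.sqrt(np.diag(M))        # sqrt(c_i^T P Qv P c_i)
    w = (M @ l) / (sigma0 * d)     # Eq (6)
    rho = M / np.outer(d, d)       # Eq (7); diagonal is identically 1
    return w, rho
```

The largest |w_i| that also exceeds the threshold is then the exclusion candidate of Section 2.3.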

Based on Equation (3) the null hypothesis corresponding to the assumption that there are no faulty pseudorange measurements is:

(8)$$H_0 :\left\{ {\matrix{ {E({l}) = {Ax}} \cr {w_i \sim N(0,1)} \cr}} \right.$$

Under the null hypothesis, Equations (1) and (3) are equivalent. Otherwise, the alternative hypothesis H i means that the i th pseudorange is faulty:

(9)$$H_i :\left\{ {\matrix{ {E\left( {l} \right) = {Ax} + {c}_i \nabla _i} \cr {w_i \sim N(\delta, 1)} \cr}} \right.$$

Taking the expectation of the outlier statistic in Equation (6), the non-centrality parameter can be obtained as:

(10)$$\delta (\hat \nabla _i ) = \displaystyle{{\nabla _i} \over {\sigma _0 \sqrt {Q_{\hat \nabla _i}}}} $$

2.2. Fault Detection

In the fault detection phase, the overall validity of the null hypothesis is tested:

(11)$$H_0 :\; {\it \Omega} = \displaystyle{{{v}^{\rm T} {Pv}} \over {\sigma _0^2}} \sim {\rm \chi} ^2 (n - t,\; 0)$$

If this leads to a rejection of the null hypothesis, then it is concluded that a fault is present. Ideally, the Type I error for each pseudorange is set such that the probability of any one of the outlier tests failing, when the null hypothesis is true, is equal to the probability of a false alert, $P_{\rm FA}$. Due to the difficulty in achieving this exactly, the Type I error of the outlier test is conservatively set as (Kelly, 1998):

(12)$$\alpha _0 = 1 - \root n \of {1 - P_{{\rm FA}}} $$
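As a numerical illustration of Equation (12) (the values n = 8 and $P_{\rm FA}$ = 10⁻³ are assumed for the example only), the Type I error allocation and the corresponding two-sided threshold can be evaluated with the Python standard library:

```python
from statistics import NormalDist

n = 8          # assumed number of pseudoranges
P_FA = 1e-3    # assumed probability of a false alert

alpha0 = 1 - (1 - P_FA) ** (1 / n)              # Eq (12)
c_alpha = NormalDist().inv_cdf(1 - alpha0 / 2)  # two-sided threshold N_{alpha0/2}(0,1)

print(alpha0, c_alpha)
```

For these values $\alpha_0$ ≈ 1.25 × 10⁻⁴, very close to the naive equal allocation $P_{\rm FA}/n$.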

The outlier test statistic in Equation (6) follows a standard normal distribution under $H_0$. The evidence on whether the model error specified by Equation (3) did, or did not, occur is based on the test:

(13)$$|w_i | \gt N_{\displaystyle{{\alpha _0} \over 2}} (0,1)$$

By letting i run from one up to and including n, all of the pseudorange measurements can be screened for the presence of a fault. If one or more of the outlier tests fails then it is concluded that a fault exists.

Besides the possibility of making a Type I error, there is also the possibility that the null hypothesis is accepted when in fact it is false. This error, denoted $\beta_0$, is a Type II error. Thus, whenever the null hypothesis is accepted there is a possibility of making a Type II error, and whenever the alternative hypothesis is accepted there is a possibility of making a Type I error. Therefore, no matter what decision is made, there is always the possibility of making an error. However, steps may be taken to control the probability of making errors and to guarantee that the probability of making a correct decision can be estimated.

By setting the threshold based on the probability of a false alert, the Type I error can be controlled. To control the Type II error, protection levels are formulated and compared with the alert limit. If the protection level is contained within the alert limit then the probability of making a Type II error is acceptable. Otherwise, it is not. When formulating the protection levels it is desired to set the size of the Type II error for each test such that the probability of a fault going undetected by all of the tests is equal to the probability of a missed detection, $P_{\rm MD}$. However, due to the difficulty in achieving this, Kelly (1998) uses the approximation:

(14)$$\beta _0 \le P_{{\rm MD}}. $$

2.3. Fault Exclusion

When the fault detection procedure has detected a fault, the next step is to attempt to identify and remove the faulty pseudorange. Since the null hypothesis has been rejected, the pseudorange measurements conform to one of the alternative hypotheses:

(15)$$H_i :{\rm E}({l}) = {Ax} + {c}_i \nabla _i \quad i = 1, \cdots, n$$

To determine which alternative hypothesis holds, the largest outlier statistic, in absolute value, is found. The corresponding pseudorange is then deduced to be faulty. Mathematically, the j th pseudorange is declared faulty when:

(16)$$|w_j | \gt |w_i |\; \; \forall i\; \quad {\rm and}\quad |w_j | \gt {\rm N}_{\displaystyle{{\alpha _0} \over 2}} (0,1)$$
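The exclusion rule of Equation (16) simply selects the largest statistic in absolute value, provided it also exceeds the threshold. A minimal sketch (assuming numpy; the function name is illustrative):

```python
import numpy as np

def identify_fault(w, c_alpha):
    """Return the index j satisfying Eq (16), or None if no statistic
    exceeds the threshold (i.e. no fault can be identified)."""
    j = int(np.argmax(np.abs(w)))
    return j if abs(w[j]) > c_alpha else None
```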

Once the faulty pseudorange has been identified, corrective action must be taken to mitigate its influence on the navigation solution. Here, the identified pseudorange is excluded from the model, so that Equation (1) has one fewer pseudorange measurement. Since, due to the correlation between the outlier statistics, the wrong pseudorange can at times be identified, the FDE procedure would normally be reapplied to the updated model until the null hypothesis is accepted.

3. SEPARABILITY ANALYSES OF FAULT DETECTION AND EXCLUSION

During the FDE procedure, wrong decisions can sometimes be made. According to Förstner (1983), a Type III error occurs when both a Type I and a Type II error are committed, which means making a wrong exclusion. In the following sections, the origins of the three types of errors are presented. Then, this knowledge is applied to the FDE procedure to determine if a fault can be successfully identified.

3.1. Three Types of Error, Based on Two Alternative Hypotheses

In Förstner's pioneering studies, the decisions that can be made with two alternative hypotheses are described in Table 1 (Förstner, 1983; Li, 1986).

Table 1. Decisions when testing two alternative hypotheses.

From Table 1, it can be seen that $\alpha_{00} = \alpha_{0i} + \alpha_{0j}$, and, because of the symmetry of $w_i$ and $w_j$, $\alpha _{0i} = \alpha _{0j} = {\textstyle{1 \over 2}}\alpha _{00} $. In addition, the following is satisfied:

(17)$$\beta _{ii} = \beta _{i0} + \gamma _{ij} $$

and:

(18)$$\beta _{\,jj} = \beta _{\,j0} + \gamma _{\,ji} $$

The estimation of the parameters shown in Table 1 is based on the distributions of the test statistics. It is assumed that there is an outlier in the i th observation, causing the expectation of $w_i$ to become δ, the non-centrality parameter. The bias also causes the expectation of $w_j$ to become δρ, because of the correlation between $w_i$ and $w_j$, which can be computed from Equation (7). Successful identification then means accepting the alternative hypothesis $H_i$ rather than $H_j$. The joint distribution of $w_i$ and $w_j$ is:

(19)$${w} = (w_i {\kern 1pt} w_j )^{\rm T} \sim N({\mu}, {\kern 1pt} {D})$$

The expectation and covariance matrix of the joint distribution of $w_i$ and $w_j$ in Equation (19) are then given by:

(20)$${\mu} = (\delta {\kern 1pt} \delta \rho )^{\rm T}, \,\; {\rm and}\, \; {D} = \left( {\matrix{ 1 & \rho \cr \rho & 1 \cr}} \right)$$

The probability density function of the two outlier statistics is then:

(21)$$f\,(w_i, {\kern 1pt} w_j ) = f_1 = \displaystyle{1 \over {2\pi \sqrt {1 - \rho ^2}}} \exp \left( { - {\textstyle{1 \over 2}}({w} - {\mu} )^{\rm T} {D}^{ - 1} ({w} - {\mu} )} \right)$$

If the critical value $c_\alpha$ and the distribution of w are known, the probability of successful identification, denoted $(1 - \beta_{ii})$, can be obtained from:

(22)$$1 - \beta _{ii} = \int_{|w_i | \gt c_\alpha, \; |w_i | \gt |w_j |} {\,f_1 dw_i dw_j} $$

The sizes of Type II and Type III errors can be obtained from:

(23)$$\beta _{i0} = \int_{|w_i | \le \; c_\alpha, \; |w_j | \le \; c_\alpha} {\,f_1 dw_i dw_j} $$

and:

(24)$$\gamma _{ij} = \int_{|w_j | \gt c_\alpha, \; |w_j | \gt |w_i |} {\,f_1 dw_i dw_j} $$
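The three integrals in Equations (22) to (24) have no closed form, but for given δ, ρ and $c_\alpha$ they can be estimated by Monte Carlo sampling of the bivariate normal distribution of Equations (19) and (20). The following sketch (standard library only; names illustrative) draws correlated pairs and classifies each draw into the three decision regions:

```python
import math
import random

def error_probs(delta, rho, c_alpha, n_draws=200_000, seed=1):
    """Monte Carlo estimate of the probabilities in Eqs (22)-(24),
    assuming the fault lies on the i-th pseudorange, so that
    w = (w_i, w_j) ~ N((delta, delta*rho), D) as in Eqs (19)-(20).
    Returns (1 - beta_ii, beta_i0, gamma_ij)."""
    rng = random.Random(seed)
    s = math.sqrt(1.0 - rho * rho)
    ident = miss = wrong = 0
    for _ in range(n_draws):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rng.gauss(0.0, 1.0)
        w_i = delta + z1                       # marginal N(delta, 1)
        w_j = delta * rho + rho * z1 + s * z2  # marginal N(delta*rho, 1), corr rho
        if abs(w_i) <= c_alpha and abs(w_j) <= c_alpha:
            miss += 1                          # Eq (23): missed detection
        elif abs(w_i) > abs(w_j):
            ident += 1                         # Eq (22): correct identification
        else:
            wrong += 1                         # Eq (24): wrong exclusion
    return ident / n_draws, miss / n_draws, wrong / n_draws
```

Running this with a small ρ reproduces the behaviour described below: the wrong-exclusion probability stays near zero, while a ρ near 1 makes it substantial even for large δ.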

In Förstner (1983) and Li (1986), if $\alpha_0$, ρ, $\beta_{i0}$ and $\gamma_{ij}$ are given, then the non-centrality parameter δ is obtained from:

(25)$$\delta _1 = \varphi _1 \left( {c_\alpha, \rho, \beta _{i0} \;} \right) = \varphi _1 \left( {\alpha _0, \rho, \beta _{i0} \;} \right)$$

or:

(26)$$\delta _2 = \varphi _2 (c_\alpha, \rho, \gamma _{ij} ) = \varphi _2 (\alpha _0, \rho, \gamma _{ij} )$$

By setting the values of $\beta_{i0}$ and $\gamma_{ij}$ with the same preset values of $\alpha_0$ and ρ, the resulting non-centrality parameters may differ. In this case, the greater value of δ is chosen, to satisfy the requirements that the probability of a Type II error is not greater than $\beta_{i0}$ and that the probability of a Type III error is not greater than $\gamma_{ij}$ (Förstner, 1983; Li, 1986). In this paper the non-centrality parameter is calculated from:

(27)$$\delta = \varphi (c_\alpha, \rho, \beta _{ii} ) = \varphi (\alpha _0, \rho, \beta _{ii} )$$

This is because the probability of making errors, $\beta_{ii}$, remains unchanged for different correlation coefficients. Nonetheless, the correlation coefficient does determine the ratio between $\beta_{i0}$ and $\gamma_{ij}$. Consequently, given preset values for $\alpha_{00}$ and $\beta_{ii}$, the non-centrality parameter will change along with the correlation coefficient. For this non-centrality parameter, the probabilities of making Type II and Type III errors will each be no greater than $\beta_{ii}$, and their sum will be equal to $\beta_{ii}$.

3.2. Relationships between Different Parameters

Although many parameters control the probabilities of making the different types of error with two alternative hypotheses, only three are fundamentally independent: $\alpha_0$, ρ and $\beta_{ii}$. All other parameters can be obtained from these, as shown in Table 2.

Table 2. The relationship among different parameters.

From Table 2 it can be seen that once $\alpha_0$ and ρ are given, the threshold $c_\alpha$ and the Type I error sizes $\alpha_{00}$, $\alpha_{0i}$ and $\alpha_{0j}$ can be estimated. By additionally setting $\beta_{ii}$, the values of δ, $\beta_{i0}$ and $\gamma_{ij}$ can be calculated.

The changes in $\beta_{i0}$ and $\gamma_{ij}$ are shown in Figure 1, for $\beta_{ii}$ equal to 20% and $\alpha_0$ set to values of 0·1%, 0·3%, 0·5%, and 1%. This illustrates the fact that the Type II error $\beta_{i0}$ and the Type III error $\gamma_{ij}$ have opposite tendencies as the correlation coefficient increases. For small correlation coefficients $\beta_{i0}$ is large and $\gamma_{ij}$ is small, whereas for large correlation coefficients $\gamma_{ij}$ is large and $\beta_{i0}$ is small. When ρ is close to zero, $\beta_{i0}$ is around 20% and $\gamma_{ij}$ is approximately zero, irrespective of the size of the Type I error. Conversely, as ρ approaches 0·98, $\beta_{i0}$ reduces quickly to about zero and $\gamma_{ij}$ increases rapidly to 20%.

Figure 1. Type II and III errors.

The above analyses are based on preset values of $\alpha_0$ and $\beta_{ii}$, in order to determine the dependence of the other parameters on ρ. In the following analysis, $\alpha_0$ is preset to 1% and the changes of $\beta_{ii}$, $\beta_{i0}$ and $\gamma_{ij}$ with δ and ρ are examined.

The value of $\beta_{ii}$, as it changes with δ and ρ, is shown in Figure 2, which demonstrates that a larger correlation coefficient leads to a higher value of $\beta_{ii}$ when δ is kept constant. In addition, a larger δ results in a smaller $\beta_{ii}$ for the same value of ρ. This means that a higher δ and a smaller ρ will enhance the probability of correct identification. When δ becomes larger, the impact of the correlation coefficient on $\beta_{ii}$ becomes much more significant. When ρ is zero, $\beta_{ii}$ decreases quickly from around one to zero as δ increases. This means that when the outlier statistics are independent of each other, the probability of committing errors can be controlled to near zero once δ is large enough. However, as ρ approaches 1, even increasing the non-centrality parameter to 20 does not reduce $\beta_{ii}$ below 40%. This indicates that when the correlation coefficient is approximately 1, the non-centrality parameter has only a small effect in decreasing $\beta_{ii}$. Consequently, to control the probability of making errors, the goal should be to keep the correlation coefficients between the outlier statistics to minimum values.

Figure 2. The sum probability of making errors $\beta_{ii} = \beta_{i0} + \gamma_{ij}$.

The values of $\beta_{i0}$ and $\gamma_{ij}$ are shown in Figures 3 and 4 respectively. Figure 3 shows that $\beta_{i0}$ decreases from about one to zero as δ increases. The decrease in the $\beta_{i0}$ curves is similar for different ρ; thus, the correlation coefficient plays a relatively minor role in influencing $\beta_{i0}$. Figure 4 shows that the correlation coefficient significantly impacts $\gamma_{ij}$. All of the curves quickly increase to a peak and then decrease slowly. The larger the correlation coefficient, the higher the peak and the slower the decrease; for large correlation coefficients, the peak also occurs at larger values of the non-centrality parameter. When ρ is zero, $\gamma_{ij}$ is always close to zero no matter what the value of δ. When the correlation coefficient is close to 1, $\gamma_{ij}$ increases dramatically from about zero to nearly 50% and then decreases rather slowly. Therefore, when the correlation coefficient is high, the dominant challenge in successful identification is avoiding a Type III error, because it is difficult to control the probability of doing so by increasing δ.

Figure 3. Type II error $\beta_{i0}$.

Figure 4. Type III error $\gamma_{ij}$.

Based on the above analysis, it can be deduced that the probabilities of making Type I, II and III errors can be accurately estimated from the correlation coefficients and the non-centrality parameters. These parameters can also be used to provide accurate control over making correct decisions in an FDE procedure. For instance, a large correlation coefficient implies a greater chance of incorrect identification; under these circumstances, a larger non-centrality parameter is required to control the probability of committing a Type III error.

3.3. Fault Verification

As can be seen, each test statistic is a function of all the observation errors, and the correlation between the test statistics contributes to missed detections and wrong exclusions. This section presents a practical procedure for estimating the probability of a missed detection and the probability of a wrong exclusion. The calculation is based on the following assumption: if one considers only the bias error on one of the satellites at a time, and neglects the range errors on all the other satellites, then the position estimation error and the test statistic become linearly proportional, with a slope that varies depending on which satellite has the bias error. The satellite that is the most difficult to detect is the one with the maximum correlation coefficient and the highest probability of a wrong exclusion; the probability of a missed detection is highest for a failure of that satellite (Lee, 1995; Lee et al., 1996).

From Equation (6), when only the bias error on the i th observation is taken into consideration and the random errors on other satellites are neglected, the test statistic simplifies to:

(28)$$\bar w_i = \displaystyle{{{c}_i^{\rm T} {PQ}_v {Pc}_i \varepsilon _i} \over {\sigma _0 \sqrt {{c}_i^{\rm T} {PQ}_v {Pc}_i}}} $$

The influence of this bias error on another test statistic is:

(29)$$\bar w_j = \displaystyle{{{c}_j^{\rm T} {PQ}_v {Pc}_i \varepsilon _i} \over {\sigma _0 \sqrt {{c}_j^{\rm T} {PQ}_v {Pc}_j}}} = \rho _{\,ji} \bar w_i $$

However, as the real circumstances are never known, the hypothesis test is an inverse procedure, which uses the estimated statistic to deduce the unknown fault. According to the second column of Table 1, when the i th test statistic has the greatest absolute value during the classical FDE procedure, there are two possibilities: either a successful identification or a wrong exclusion. A wrong exclusion means that an outlier occurring on the j th observation impacts the i th test statistic, so that $w_i$ becomes greater than the critical value. Consequently, the expectation shift of $w_i$ may originate from:

(30)$$\delta _i \approx E(\bar w_i )$$

which will lead to a successful identification, or:

(31)$$\delta _j \approx \displaystyle{{E(\bar w_i )} \over {\rho _{ij}}} $$

which means a wrong exclusion will be committed after the test.
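Treating the observed value of the largest statistic as an estimate of $E(\bar w_i)$ (an approximation, since the noise realisation is unknown), the two candidate non-centrality parameters of Equations (30) and (31) are simply:

```python
def candidate_noncentralities(w_i, rho_ij):
    """Eq (30): fault on the i-th pseudorange -> delta_i ~ |w_i|
    Eq (31): fault on the j-th pseudorange -> delta_j ~ |w_i / rho_ij|
    (illustrative sketch; w_i is the largest observed statistic)."""
    return abs(w_i), abs(w_i / rho_ij)
```

Each candidate is then fed into the relationships of Table 2, via Equations (32) and (33), to score the competing hypotheses.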

Based on Equations (30) and (31), the non-centrality parameters corresponding to the largest test statistic can be obtained. Then, using each non-centrality parameter and the correlation coefficient, the probabilities of successful identification and wrong exclusion can be calculated from the relationships in Table 2:

(32)$$1 - \beta _{ii} = \varphi (\alpha _0, \rho, \delta _i )$$

or:

(33)$$\gamma _{\,ji} = \varphi (\alpha _0, \rho, \delta _j )$$

As it is complicated and time-consuming to calculate $\beta_{ii}$ and $\gamma_{ji}$ exactly via numerical integration, an approximate solution can be obtained by interpolation from the grid data illustrated in Figures 2, 3 and 4, provided the grid is dense enough. By comparing the estimated values of $\beta_{ii}$ and $\gamma_{ji}$ with the corresponding preset thresholds, decisions about successful identification and wrong exclusion can be made.

4. QUALITY CONTROL FOR FAULT DETECTION AND EXCLUSION

In this section a practical procedure for controlling the quality of the FDE process is introduced, based on the above analysis. The main proposal is to estimate the probability of a missed detection and of a wrong exclusion based on the separability analysis of two alternative hypotheses. These probabilities depend on the magnitude of the bias, of which the receiver has no knowledge and which may continuously vary. For this reason the simulation tests for verifying the probabilities of a missed detection and a wrong exclusion were designed as follows:

  • Step 1. Form the observation equation ${v} = {l} - {A\hat x}$.

  • Step 2. Calculate the protection level based on the probability of a false alert and the probability of a missed detection.

  • Step 3. If the protection level is less than the alert limit, then proceed to Step 4. Otherwise, the system is unavailable for navigation; proceed to Step 1 for the next epoch.

  • Step 4. Calculate the outlier statistics and compare them with the threshold.

  • Step 5. If all of the outlier statistics pass, the position is available for navigation; proceed to Step 1 for the next epoch. Otherwise, proceed to Step 6.

  • Step 6. Calculate the probability of correct identification for the largest outlier statistic. If the probability of correct identification is higher than its threshold, exclude this observation and proceed to Step 4. Otherwise, proceed to Step 7.

  • Step 7. Calculate the probability of wrong exclusion between the largest and the second largest outlier statistics. If the probability of wrong exclusion is higher than its threshold, exclude both observations and proceed to Step 4. Otherwise, the position is unavailable for navigation; proceed to Step 1 for the next epoch.
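Steps 4 to 7 above can be sketched as a loop. The helper callables `prob_id` and `prob_we` are hypothetical stand-ins for the interpolation of Equations (32) and (33), and in a real implementation the outlier statistics would be recomputed after each exclusion rather than merely dropped from the list:

```python
def fde_quality_control(w, c_alpha, prob_id, prob_we,
                        p_id_min=0.80, p_we_max=0.03):
    """Sketch of Steps 4-7; assumes the protection-level check of
    Steps 2-3 has already passed.
    w        : outlier statistics for the current epoch
    prob_id  : index -> probability of correct identification (Eq 32)
    prob_we  : index -> probability of wrong exclusion (Eq 33)
    Returns ('available' | 'unavailable', excluded positions in the
    then-current list)."""
    excluded = []
    while w:
        order = sorted(range(len(w)), key=lambda k: -abs(w[k]))
        j = order[0]
        if abs(w[j]) <= c_alpha:               # Step 5: all outlier tests pass
            return 'available', excluded
        if prob_id(j) >= p_id_min:             # Step 6: trustworthy identification
            excluded.append(j)
            w = [v for k, v in enumerate(w) if k != j]
        elif len(order) > 1 and prob_we(j) >= p_we_max:   # Step 7: exclude both
            excluded += [j, order[1]]
            w = [v for k, v in enumerate(w) if k not in (j, order[1])]
        else:
            return 'unavailable', excluded
    return 'unavailable', excluded
```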

Based on the above analysis, the traditional FDE procedure is still applied, but with new criteria that estimate the probability of a missed detection and of a wrong exclusion, in order to improve the successful identification rate.

5. EXPERIMENTS AND ANALYSIS

The separability analysis theory described in this paper was applied to GPS pseudorange data collected at Minot, North Dakota, USA on 18 August 2008. The sampling interval is 30 seconds, and the duration of the data is 24 hours. To compare the classical FDE procedure and the optimal FDE method proposed in this paper, an outlier of 1·5 times the Minimal Detectable Bias (MDB) was added to the second pseudorange at each epoch. The parameters for the FDE procedure were α=1% and β=20%, and the thresholds for the probabilities of successful identification and wrong exclusion were set to $1-\beta_{ii}$=80% and γ=3%.

For the classical FDE procedure, there are three types of judgment, based on the global and local tests. The judgment indicator for each epoch is shown in Figure 5. Indicator=0 indicates that the global test was passed and no outlier exists; indicator=1 indicates that the global test failed but the local test was passed, meaning that the existence of an outlier was detected but its location could not be identified; indicator=2 indicates that both the global and local tests were rejected, so the outlier can be identified.

Figure 5. Indicator for data snooping procedure.

Figure 5 shows that, although a fault was added at every epoch, there are still some epochs at which the fault cannot be detected. In such circumstances both indicator=0 and indicator=1 signify a missed detection, while indicator=2 signifies an identification. As the real location of the fault is known in this test, whether the identification is correct or not can be evaluated. The location of the fault identified at each epoch is shown in Figure 6. This clearly shows that there is a considerable possibility of identifying the fault on the wrong satellite. Furthermore, a wrong identification (or exclusion) will negatively influence the position accuracy, especially when the satellite geometry is weak or when there are no redundant measurements.

Figure 6. Fault location identified by FDE procedure.

The judgment indicator for the proposed optimal FDE procedure is shown in Figure 7. When applying the new method, two more criteria apply to the local test: the probability of successful identification should be greater than its threshold, and the probability of wrong exclusion should be smaller than its threshold. Consequently, the results are more detailed:

  • 0 indicates that the global test was passed;

  • 1 indicates that the global test failed and the local test was passed;

  • 2 indicates that the global test failed and the local test was rejected, with the two new criteria satisfied (so it is assumed that the outlier can be identified);

  • 3 indicates that, although the global and local tests were rejected, the confidence level for correct identification was not satisfied, therefore the outlier cannot be identified;

  • 4 indicates that, although the other criteria are satisfied, the probability of wrong exclusion is higher than its threshold (which implies an unacceptable risk of making a wrong exclusion).

Figure 7. Indicator for the optimal FDE procedure with wrong exclusion estimation.
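Since the refinement from Figure 5 to Figure 7 only affects the indicator=2 branch, it can be expressed as a post-check on the classical outcome. The function below is a hypothetical sketch: the default thresholds (80% for successful identification, 3% for wrong exclusion) are the values used later in this section, and the two probabilities are assumed to have been evaluated elsewhere from the correlation coefficients and the non-centrality parameter.

```python
def optimal_fde_indicator(indicator, p_correct_id, p_wrong_excl,
                          p_id_min=0.80, p_we_max=0.03):
    """Refine the classical FDE indicator with the two extra criteria
    of the optimal procedure (Figure 7)."""
    if indicator != 2:
        return indicator          # outcomes 0 and 1 are unchanged
    if p_correct_id < p_id_min:
        return 3                  # identification confidence too low
    if p_wrong_excl > p_we_max:
        return 4                  # unacceptable risk of wrong exclusion
    return 2                      # identification accepted
```

Only an identification that survives both checks (indicator=2) leads to an exclusion; indicators 3 and 4 flag the epoch as detected-but-untrusted.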

Comparing Figure 7 with Figure 5, it can be seen that the results for indicator=0 and indicator=1 are the same for both procedures, while the results for indicator=2 in Figure 5 are divided into three parts (indicator=2, 3 and 4) in Figure 7. This means that the optimal FDE procedure places more restrictions on the identification so as to guarantee a high rate of successful identification. Figure 8 shows the location of the fault identified by the optimal FDE procedure when indicator=2. Compared with Figure 6, the probability of wrong exclusion is clearly smaller, which means that with the stricter criteria most wrong exclusions can be separated from the correct identifications.

Figure 8. Fault location identified by optimal FDE.

The corresponding probabilities of successful identification and of wrong exclusion are presented in Figures 9 and 10, respectively. In Figure 9, the red marks show that at many epochs, although the local test can identify the fault, the evaluated probability of successful identification is smaller than its threshold (80%); consequently the identification is considered untrustworthy.

Figure 9. Probability of successful identification.

Figure 10. Probability of wrong exclusion.

Figure 10 shows that, whenever the indicator equals 4, the evaluated probability of a wrong exclusion is greater than the threshold (3%). At many epochs the probability of a wrong exclusion even exceeds 20%, meaning that the identification is untrustworthy and the position accuracy may be degraded after exclusion.

The North, East and Vertical position errors for the different methods are shown in Figure 11. It can be seen that even the plain least-squares results are much better than those of the classical FDE procedure, which means that the position accuracy is degraded when the classical FDE procedure is applied. This is caused by the frequent occurrence of wrong exclusions, since there is no criterion to check whether a wrong exclusion has been committed. The red curves show that the optimal FDE procedure improves the reliability and stability of the results.

Figure 11. Position errors (metres, outlier size: 1·5 MDB).

Figure 12 shows the corresponding results for an outlier with a magnitude of 4 times the MDB. Compared with Figure 11, it is clear that with a larger outlier the position accuracies of the different methods are all reduced; however, the estimation accuracies of both the proposed and the classical FDE procedures are much higher than those of least-squares estimation. This shows that least-squares estimation is no longer optimal, in the sense of being unbiased and of minimal variance. Since neither the existence nor the magnitude of a fault can be predicted beforehand, the applied FDE procedure must guarantee the stability and reliability of the results.

Figure 12. Position errors (metres, outlier size: 4 MDB).
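The outlier magnitudes in Figures 11 and 12 are expressed in multiples of the Minimal Detectable Bias. For reference, Baarda's MDB can be computed as sketched below; this assumes uncorrelated, equal-variance observations and the usual normal approximation for the non-centrality parameter, so it is illustrative rather than the exact computation used in the experiments.

```python
import numpy as np
from scipy import stats

def mdb(A, sigma=1.0, alpha0=0.001, beta=0.20):
    """Minimal Detectable Bias for each observation of a Gauss-Markov
    model with design matrix A, assuming P = I (equal weights)."""
    n = A.shape[0]
    # Residual cofactor matrix Qv = I - A (A^T A)^-1 A^T; its diagonal
    # entries are the redundancy numbers of the observations.
    Qv = np.eye(n) - A @ np.linalg.solve(A.T @ A, A.T)
    # Non-centrality parameter from the significance level alpha0 of the
    # w-test and the required detection power 1 - beta.
    delta0 = stats.norm.ppf(1 - alpha0 / 2) + stats.norm.ppf(1 - beta)
    return delta0 * sigma / np.sqrt(np.diag(Qv))
```

Observations with small redundancy numbers (weak geometry) produce large MDBs, i.e. only large faults are reliably detectable there.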

6. CONCLUSIONS

This paper has studied the separability of two alternative hypotheses, including the relationships between the different statistical parameters. The probabilities of making Type I, II and III errors were found to depend on the correlation coefficients between the outlier statistics. The larger the correlation coefficient, the larger the non-centrality parameter required to guarantee successful identification; it is therefore much more difficult to correctly identify a faulty pseudorange that is highly correlated with other pseudorange measurements. A larger correlation coefficient also significantly increases the probability of making a wrong exclusion. Increasing the non-centrality parameter does not monotonically increase or decrease the probability of making a wrong exclusion, although a sufficiently large non-centrality parameter will eventually decrease it.
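For uncorrelated observations of equal weight, the correlation coefficients between the w-test statistics discussed above follow directly from the residual cofactor matrix. A minimal sketch (assuming P = I, with A the design matrix; the function name is illustrative):

```python
import numpy as np

def wtest_correlations(A):
    """Correlation coefficients between Baarda w-test statistics for a
    Gauss-Markov model with design matrix A and weight matrix P = I."""
    n = A.shape[0]
    # Residual cofactor matrix Qv = I - A (A^T A)^-1 A^T.
    Qv = np.eye(n) - A @ np.linalg.solve(A.T @ A, A.T)
    d = np.sqrt(np.diag(Qv))
    # rho_ij = Qv_ij / sqrt(Qv_ii * Qv_jj)
    return Qv / np.outer(d, d)
```

As a simple example, for a single-parameter model with three equally weighted observations, every pair of w-statistics has correlation -0.5, illustrating how low redundancy drives high correlation and hence poor separability between alternative hypotheses.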

In terms of the Fault Detection and Exclusion (FDE) procedure, the results presented here can be used to determine the probability of a wrong exclusion. This entails simply inspecting a graphical presentation of the probability of a wrong exclusion as a function of the correlation coefficient, or calculating it directly. The probability of a wrong exclusion can then be used to determine whether an exclusion is to be trusted, or whether FDE must be reapplied after the removal of a pseudorange measurement.

ACKNOWLEDGEMENTS

The first author wishes to record her appreciation to the China Scholarship Council (CSC) for supporting her studies at the University of New South Wales, Australia.
