
A new approach to UTC calculation by means of the Kalman filter

Federica Parisi and Gianna Panfilo

Published 17 August 2016 © 2016 BIPM & IOP Publishing Ltd
Metrologia 53 (2016) 1185, Focus on Time Scale Algorithms
DOI: 10.1088/0026-1394/53/5/1185

Abstract

In this paper a new approach to the calculation of Coordinated Universal Time (UTC), based on the Kalman filter, is presented. An ensemble of atomic clocks participating in UTC is selected to analyse and test the potential of this new method.


Introduction

Coordinated Universal Time (UTC) is calculated each month at the BIPM by using more than 450 atomic clocks located world-wide. The algorithms are optimized for a time scale that is both accurate and stable over the long term.

The most important algorithms are the prediction, the weighting and the steering procedures. In this paper a new approach is tested to improve the performance of UTC. The new algorithm is based on the Kalman filter routine. The Kalman filter is used in many fields because of its ability to remove white phase noise from data. In the time and frequency domain the Kalman filter is particularly relevant because it is widely used to build time scales. The short-term stability of UTC is dominated by the white noise of the time transfer used to compare clocks; the idea is to improve UTC stability by means of the Kalman filter. In this first stage we selected an ensemble of clocks that ran continuously, without time or frequency steps, for 3 years. UTC is calculated with this ensemble of clocks. The Kalman-based time scale is calculated by using the weights obtained by auxiliary procedures related to the Kalman routine, and the results are compared to those obtained by using the weights from the UTC algorithm.

In the first section the motivation of the paper is presented; the second section introduces the UTC computation together with the prediction and weighting algorithms. The third section deals with the Kalman procedure, and the results obtained are reported in section 4.

1. Motivation of the paper

The UTC time scale has a unique role in time and frequency metrology: it is the international reference used to compare and study any other time scale or clock ensemble. The algorithms developed for the calculation of UTC need to fulfil two major requirements: to guarantee the best metrological product and to play an international role by using the largest possible number of clocks, spreading traceability all over the world. Currently, the time scales showing the best performance in terms of stability are based on a single atomic fountain clock or on a small ensemble of them [1, 2]. However, the algorithms used to build time scales based on a large clock ensemble remain important for UTC (because of the traceability requirement), for real-time time scales [3] and for national laboratories and observatories not equipped with atomic fountain clocks. In all these scenarios, adapted mathematical tools are necessary to build an optimal product. In this paper we develop an algorithm for UTC based on Kalman filter techniques and we investigate the improvement it could bring to the time scale in terms of short- and long-term stability, while respecting the unique role of UTC as international reference. Innovative studies applied to the algorithms used for the computation of UTC will accelerate progress in the field of time scales, both for improving the metrological quality of the reference and for supporting further research on the construction of time scales for which UTC is the reference.

2. The calculation of UTC

The various time laboratories world-wide maintain a stable local time scale based on individual atomic clocks or a clock ensemble. The clock readings reported by these laboratories are then combined at the BIPM through an algorithm designed to optimize frequency stability and accuracy. The BIPM Time Department uses an appropriate algorithm [4–6] to generate the international reference UTC each month. The calculation of UTC is carried out in three steps: first, the free atomic time scale EAL is computed as a weighted average of about 450 free-running atomic clocks distributed world-wide; second, the frequency of EAL is steered to maintain agreement with the definition of the SI second, the resulting time scale being TAI; third, leap seconds are inserted to maintain agreement with the non-uniform time derived from the rotation of the Earth. The resulting time scale is UTC. UTC relies on the largest possible number of atomic clocks of different types, located in different parts of the world and connected via a network that allows precise time comparisons between remote sites. Each month the differences between the international time scale UTC and the local time scales UTC(k), maintained at the contributing time laboratories, are reported at 5 d intervals in an official document called BIPM Circular T [7]. The algorithm used to define EAL gives the difference between EAL and the reading hi of each participating clock Hi [8]:

Equation (1)

where N is the number of participating clocks, $\text{EAL}{{\text{W}}_{i}}$ is the relative weight of the clock Hi, hi(t) is the reading of the clock Hi at time t, and $h_{i}^{\prime}(t)$ is the predicted reading of the clock Hi that serves to guarantee the continuity of the time scale. The data used by the algorithm and reported in (1) take the form of the time differences between readings of clocks, denoted as:

Equation (2)
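In the classical ALGOS formulation [8] these two relations read, up to notation (the published equations (1) and (2) may differ slightly in form), as a weighted average of the predicted clock readings and as the measured clock differences:

$$\text{EAL}(t)=\sum_{i=1}^{N}\text{EALW}_{i}\left[h_{i}(t)+h_{i}^{\prime}(t)\right],\qquad \sum_{i=1}^{N}\text{EALW}_{i}=1,$$

$$x_{i,j}(t)=h_{i}(t)-h_{j}(t).$$

Solving the weighted average together with the measured differences yields the quantities $\text{EAL}(t)-h_{i}(t)$ for each clock of the ensemble.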

2.1. Prediction algorithm

Details concerning the clock frequency prediction, indicated by the term $h_{i}^{\prime}(t)$ and used in the calculation of UTC, can be found in [5]. The prediction term is expressed in quadratic form to describe the frequency drift of the H-masers or the ageing of the caesium clocks. It includes three parameters, the phase, the frequency and the frequency drift (a schematic sketch is given after the list):

  • the estimate of the time correction of clock Hi relative to EAL at the date tk, included to avoid time jumps, is given by the last known difference between EAL and the clock;
  • the estimate of the frequency of clock Hi relative to EAL, predicted for the calculation period and included to avoid frequency jumps, is in principle obtained from the difference with respect to EAL between the first and last clock data points of the interval
    Equation (3)
  • the estimate of the frequency drift of clock Hi relative to a frequency reference, predicted for the calculation period under the hypothesis that the drift remains constant over a 1 month interval, is obtained from the frequency data ${{y}_{TT-{{h}_{i}}}}$ of the clock with respect to terrestrial time (TT) [9, 10]. The least-squares technique applied to 6 months of past ${{y}_{TT-{{h}_{i}}}}$ data is used to evaluate the frequency drift for all kinds of clocks.
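The following sketch, with illustrative parameter names that are not those of the BIPM software, shows how such a quadratic prediction and the least-squares drift estimate can be computed:

```python
import numpy as np

def predict_clock(t, t0, x0, y0, d0):
    """Quadratic clock prediction relative to EAL, as sketched in section 2.1.

    x0 : last known time offset [EAL - h_i](t0)                  (phase term)
    y0 : frequency of clock H_i relative to EAL over the
         previous calculation interval                           (frequency term)
    d0 : frequency drift, estimated from frequency data w.r.t. TT (drift term)
    Names and signature are illustrative only.
    """
    dt = t - t0
    return x0 + y0 * dt + 0.5 * d0 * dt**2


def estimate_drift(t_past, y_tt):
    """Least-squares slope of the frequency y_{TT-h_i} over past data
    (6 months in the UTC algorithm) as an estimate of the frequency drift."""
    d, _ = np.polyfit(np.asarray(t_past, float), np.asarray(y_tt, float), 1)
    return d
```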

2.2. Weighting algorithm

The weighting strategy is based on the principle that a good clock is a predictable clock [6, 11]. The main idea is to analyze the difference between the frequency $y_i$ of each clock and its prediction $y_i^{p}$, obtained by using relationship (3). In the UTC calculation the upper limit of the weight is set to $\text{EALW}_{\max}=4~N^{-1}$ to limit individual clock contributions, N being the number of clocks used in the calculation. The weighting algorithm is a four-iteration process. The differences between the predicted and the real frequencies are evaluated for each 1 month interval over one year, and these values are used to define the weight. Each iteration runs as follows (a schematic sketch of the weighting principle is given at the end of this subsection):

  • 1.  
    The values $\left[\text{EAL}-{{h}_{i}}\right]$ are found using a given set of relative weights. In the first iteration, the weights are those obtained in the previous computation interval after normalization. In the following iterations, the weights are obtained from the previous iteration;
  • 2.  
    The absolute value of the difference between the frequency $y_{i,j}$, as defined in (3), and the predicted frequency $y_{i,j}^{p}$, evaluated over the calculation interval j, is computed:
    Equation (4)
    where the index i identifies the clock;
  • 3.  
    One year of ${{\epsilon}_{i,j}}$ values is considered to ensure the long-term stability of EAL and UTC; the index j characterizes the calculation interval.
  • 4.  
    A filter has been implemented to give recent measurements a greater role than older ones, considering that new measurements have the most reliable statistics:
    Equation (5)
    where i identifies the clock, j the calculation interval and M the number of available measurements.
  • 5.  
    The relative weight of clock Hi is first computed as a temporary value given by
    Equation (6)

The new weight $\text{EAL}{{\text{W}}_{i}}$ of clock ${{\text{H}}_{i}}$ is equal to $\text{EAL}{{\text{W}}_{i,\text{temp}}}$ except in two cases:

  • 1.  
    Clock Hi exceeds the maximum weight allowed, as in the current algorithm.
  • 2.  
    Clock Hi shows abnormal behaviour during the computation interval, so it cannot contribute. In this case the current value of the difference between the real frequency and the predicted frequency is checked: if it is greater than 5 ns d−1, the clock is temporarily excluded from the ensemble.
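As a rough illustration of the principle described above (not the operational BIPM code; the filter, the iteration scheme and equations (4)–(6) are only mimicked), the weighting could be sketched as follows:

```python
import numpy as np

def eal_weights(eps, w_max=4.0, current_eps=None, eps_limit=5.0):
    """Illustrative sketch of the EAL weighting principle of section 2.2.

    eps         : array (N, M) of |y_{i,j} - y_{i,j}^p| over the past year,
                  oldest to newest, in ns/d
    w_max       : maximum relative weight in units of 1/N (4/N for UTC,
                  2.5/N for the reduced ensemble of section 4.2)
    current_eps : prediction error of the current interval, used to exclude
                  clocks showing abnormal behaviour
    The filter favouring recent intervals and equations (4)-(6) are only
    mimicked here; this is not the operational algorithm.
    """
    eps = np.asarray(eps, dtype=float)
    n_clocks, n_months = eps.shape
    # give recent prediction errors a larger role (illustrative linear filter)
    f = np.arange(1, n_months + 1, dtype=float)
    f /= f.sum()
    sigma2 = eps**2 @ f                     # filtered squared prediction error
    w = 1.0 / sigma2                        # temporary weights ~ 1 / sigma^2
    if current_eps is not None:
        w[np.abs(current_eps) > eps_limit] = 0.0   # temporary exclusion
    w /= w.sum()
    # cap individual contributions and renormalize (single pass, illustrative;
    # the UTC algorithm iterates this step)
    w = np.minimum(w, w_max / n_clocks)
    return w / w.sum()
```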

3. The Kalman filter routine

The Kalman filter is a linear iterative procedure that has as an output the estimate of a signal affected by random noise. First developed by Kalman in a pioneering article published in 1960 [12], this method has been growing in importance in various fields of application over the last 50 years.

When building a Kalman filter routine, it is important to understand properly which information will be needed, and in what form. We require:

  • a state vector $\mathbf{X}(t)$ and its state-space equation $\mathbf{X}(t)={{\mathbf{\Phi}}_{t,t-\tau}}\mathbf{X}(t-\tau )+\mathbf{W}(t)$ ;
  • knowledge of the noises affecting the system, accounted for by the random vector $\mathbf{W}(t)$ , a three-dimensional vector whose components are independent Brownian motion processes;
  • an initial estimate of the state vector, $\mathbf{\hat{X}}\left({{t}_{0}}\right)$ , and its covariance matrix $\boldsymbol{\Gamma}\left({{t}_{0}}\right)$ ;
  • true measurements $\mathbf{Z}(t)$ of the observable variables, which we suppose depend linearly on the state vector and are possibly affected by noise.

3.1. Algorithm

Once the necessary information has been obtained, the Kalman filter consists of two parts: an a priori estimate is made based solely on the knowledge we have of the system; this estimate is then updated to an a posteriori estimate using the additional information provided by the new set of measurements that are now available. Let us have a closer look at the procedure; a compact code sketch of the full cycle is given after the steps. For every time instant tk, we proceed as follows:

  • STEP 1: Prediction. Starting from the given initial estimate $\mathbf{\hat{X}}\left({{t}_{0}}\right)$ , for a set of time instants $\left\{{{t}_{k}}\right\}_{k=1}^{n}$ equally spaced at distance τ (see the footnotes), we compute
    Equation (7)
    where the notation means that the estimate is computed at time tk using all the information up to (and including) the instant tk−1. The covariance matrix of the estimation error is likewise computed, starting from $\boldsymbol{\Gamma}\left({{t}_{0}}\right)$ , as
    Equation (8)
    where Q is the covariance matrix of the noise vector $\mathbf{W}$ .
  • STEP 2: Measure. At each time step tk, the measurement vector can be expressed as $\mathbf{Z}\left({{t}_{k}}\right)=\mathbf{HX}\left({{t}_{k}}\right)+\mathbf{v}\left({{t}_{k}}\right)$ , where $\mathbf{v}$ represents the measurement noise (see the footnotes) and $\mathbf{H}$ is the transformation matrix. We assume that the measurement noise $\mathbf{v}$ is a zero-mean Gaussian noise with covariance matrix $\mathbf{R}\left({{t}_{k}}\right)$ . We then define a variable, called the innovation, as the difference between the vector of true measurements $\mathbf{Z}$ and the estimated one,
    Equation (9)
    whose covariance matrix is given by
    Equation (10)
    It is now possible to define the Kalman gain matrix as
    Equation (11)
  • STEP 3: Estimation update. The estimates are now updated using the innovation and the Kalman gain in the following way
    Equation (12)
    with covariance matrix
    Equation (13)
    The estimate now uses all the information up to and including the instant tk.
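A minimal, generic implementation of one prediction/measure/update cycle, in the textbook form corresponding to the three steps above (variable names are ours; the exact published expressions are those of equations (7)–(13)):

```python
import numpy as np

def kalman_step(x_hat, P, z, Phi, Q, H, R):
    """One prediction/update cycle of the Kalman filter (steps 1-3 above).

    x_hat, P : a posteriori state estimate and error covariance at t_{k-1}
    z        : measurement vector at t_k
    Phi, Q   : state-transition matrix and process-noise covariance
    H, R     : measurement matrix and measurement-noise covariance
    Generic textbook form; not the paper's implementation.
    """
    # Step 1: a priori prediction of state and error covariance, cf. (7)-(8)
    x_pred = Phi @ x_hat
    P_pred = Phi @ P @ Phi.T + Q
    # Step 2: innovation, its covariance and the Kalman gain, cf. (9)-(11)
    innov = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Step 3: a posteriori update, cf. (12)-(13)
    x_new = x_pred + K @ innov
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_new, P_new
```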

3.2. Kalman filter for atomic clocks

In the case of an ensemble of N atomic clocks, the state vector contains the phase, frequency and drift offsets of each clock in the ensemble. In this work we chose to order the state vector by first placing all the phase offsets, then all the frequencies and finally the drifts, giving

Equation (14)

The resulting $\boldsymbol{\Phi}$ matrix is a block matrix

Equation (15)

where ${{\mathbf{I}}_{N}}$ is the $N\times N$ identity matrix, and $\mathbf{O}$ is a null matrix of the same dimension.

For each clock i, the noise vector ${{\mathbf{W}}^{i}}(t)$ is a three-dimensional vector whose components are independent Brownian motions. Thus we have ${{\mathbf{W}}^{i}}(t)\sim N\left(\mathbf{0},{{\mathbf{Q}}^{i}}\right)$ , where the noise covariance matrix of the ith clock is given by

Equation (16)

with ${{\sigma}_{j,i}}$ , j  =  1, 2, 3, representing the diffusion coefficients of the three components of ${{\mathbf{W}}^{i}}(t)$ (see the footnotes). The noise covariance matrix of the whole system is then given by the $3N\times 3N$ matrix

Equation (17)

where

If we assume that the reference clock of the ensemble is located in the first position, then the measurement matrix $\mathbf{H}$ has to transform $\mathbf{X}(t)$ from (14) into a column vector of differences $\mathbf{Z}(t)={{\left[{{x}_{2}}(t)-{{x}_{1}}(t),\cdots,{{x}_{N}}(t)-{{x}_{1}}(t)\right]}^{\prime}}$ of dimension N  −  1. $\mathbf{H}$ is then the $(N-1)\times 3N$ matrix made of a column of  −1s, an $(N-1)\times (N-1)$ identity matrix, and zeros at the remaining entries.

Note that if we include measurement noise v(t) in the model, we assume it to be uncorrelated with all the components of the clock noise vector W(t).
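A sketch of how these matrices can be assembled for N clocks is given below; the per-clock process-noise covariance is written here in the standard discrete form of the three-state model of [13], which should be checked against equation (16), and all names are illustrative:

```python
import numpy as np

def clock_model_matrices(sigmas, tau, N):
    """Sketch of the system matrices for N clocks with state ordering (14):
    X = [x_1..x_N, y_1..y_N, d_1..d_N] (phases, frequencies, drifts).

    sigmas : (N, 3) diffusion coefficients (sigma_1, sigma_2, sigma_3) per clock
    tau    : time step between states
    """
    I, O = np.eye(N), np.zeros((N, N))
    # Block state-transition matrix, cf. equation (15)
    Phi = np.block([[I, tau * I, 0.5 * tau**2 * I],
                    [O, I,       tau * I],
                    [O, O,       I]])
    # Assemble the 3N x 3N process-noise covariance from per-clock 3x3 blocks
    Q = np.zeros((3 * N, 3 * N))
    for i, (s1, s2, s3) in enumerate(sigmas):
        q = np.array([
            [s1**2*tau + s2**2*tau**3/3 + s3**2*tau**5/20,
             s2**2*tau**2/2 + s3**2*tau**4/8,  s3**2*tau**3/6],
            [s2**2*tau**2/2 + s3**2*tau**4/8,  s2**2*tau + s3**2*tau**3/3,
             s3**2*tau**2/2],
            [s3**2*tau**3/6, s3**2*tau**2/2,   s3**2*tau]])
        for a in range(3):
            for b in range(3):
                Q[a * N + i, b * N + i] = q[a, b]
    # Measurement matrix: differences with respect to the reference clock,
    # placed in the first position (column of -1s, identity, zeros)
    H = np.zeros((N - 1, 3 * N))
    H[:, 0] = -1.0
    H[:, 1:N] = np.eye(N - 1)
    return Phi, Q, H
```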

3.3. Covariance x-reduction

One of the major problems we have to face when dealing with Kalman estimation is the uncontrolled growth of the estimation error. Let us recall that, due to the ordering (14), the covariance matrix of the estimation error, $\rm{\hat \Gamma }(t)$ , is a ($3N\times 3N$ ) block matrix where the first diagonal block (${{\rm{\hat \Gamma }}_{xx}}$ ) contains the estimation error on the clocks' phase components, the second block (${{\rm{\hat \Gamma }}_{yy}}$ ) that on the frequency components and the third (${{\rm{\hat \Gamma }}_{zz}}$ ) that on the drift components. The off-diagonal blocks contain the interaction terms. While the yz-submatrix of the error covariance matrix $\rm{\hat \Gamma }(t)$ , relative to the frequency and drift estimates, has been observed to be empirically well behaved [14], the xx-submatrix grows without limit. This is mainly due to the fact that the Kalman filter has to estimate the behaviour of N variables having as an input the vector $\mathbf{Z}$ of measurements, of dimension N  −  1. To overcome this problem, Greenhall proposed an approach, known as covariance x-reduction, that can be proved to be transparent with respect to the Kalman estimates of frequency and drift [14, 15]. The procedure is carried out differently in the two cases of noiseless and noisy measurements. We define

Equation (18)

where $\hat{x}(t)$ represents the Kalman phase offset estimate and $\tilde{x}(t)$ is the correction applied to x(t) on the basis of the Kalman prediction. We define the centroid ${{\bar{x}}_{0}}$ of the ${{\tilde{x}}_{i}}$ as

Equation (19)

Equation (20)

where ${{\mathbf{1}}_{N}}$ is an N-dimensional column vector of 1s, ${{\mathbf{I}}_{N}}$ the ($N\times N$ ) identity matrix and KW a vector of weights to which we will refer as the Kalman weights. As (20) expresses the error made in representing $\tilde{x}$ by means of ${{\bar{x}}_{0}}$ , we would like to determine the centroid in such a way that this error is minimized. This can be accomplished by finding the weights $\text{K}{{\text{W}}_{i}}$ , i  =  1,..., N, that minimize the quantity between brackets in (20). In order to obtain the required weights we solve the system

Equation (21)

If we define

Equation (22)

the covariance x-reduction operation in the case of noisy measurements consists of the following transformation

Equation (23)

For the derivation of (21) and further details on methods and properties see [15].
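For intuition only: a constrained quadratic minimization of this type (minimize $\mathbf{w}^{\prime}{{\rm{\hat \Gamma }}_{xx}}\mathbf{w}$ subject to the weights summing to one) has the familiar closed-form solution sketched below; the actual system (21), and in particular its noisy-measurement variant, is derived in [15] and is not reproduced here.

```python
import numpy as np

def kalman_weights(Gamma_xx):
    """Closed-form solution of the generic constrained problem
    min_w w' Gamma_xx w  subject to  sum(w) = 1, shown only to illustrate
    the structure behind the Kalman weights KW; see [15] for the actual
    derivation of (21), including the noisy-measurement case."""
    ones = np.ones(Gamma_xx.shape[0])
    g = np.linalg.solve(Gamma_xx, ones)     # Gamma_xx^{-1} 1
    return g / (ones @ g)
```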

3.4. Kalman filter for time scales

We recall the definition of clock reading ${{h}_{i}}(t)={{h}_{0}}(t)-{{x}_{i}}(t)$ , where h0 denotes an ideal clock. Our goal would be to estimate h0 by means of the measurements

Equation (24)

Having at our disposal an estimate for the xi, ${{\hat{x}}_{i}}$ (in our case the output of a Kalman filter), we define the corrected clocks as

Equation (25)

Using the definition of clock reading and (18) we have

Equation (26)

In the case of noiseless measurements all the ${{\tilde{x}}_{i}}$ are equal and therefore all the corrected clocks agree. In the presence of measurement noise, however, the corrected clocks do not agree, but they nevertheless tend to form a cluster. In light of (26), we choose to take the centroid of the cluster as the least-squares estimate of h0 given the ${{\tilde{h}}_{i}}$ . We define

Equation (27)

Using (19) and once again the definition of clock reading we get

Equation (28)

The quantity (27) defines a new time scale, the Kalman filter time scale (KF), where the vector ${{\mathbf{w}}_{g}}$ represents a generic weighting vector. If we define the representation error

Equation (29)

which expresses how well each clock represents KF, we have the following result.

Theorem 1. The weight vector that, when used to compute KF, minimizes the representation error is KW.

The reader can find the proof of this theorem in [15].
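Schematically, and under the assumption (consistent with (26)) that the corrected clocks are obtained by adding the Kalman phase estimate to each clock reading, the construction above can be summarized as

$$\tilde{h}_{i}(t)=h_{i}(t)+{{\hat{x}}_{i}}(t)={{h}_{0}}(t)-{{\tilde{x}}_{i}}(t),\qquad \text{KF}(t)=\sum_{i=1}^{N}{{w}_{g,i}}\,\tilde{h}_{i}(t)={{h}_{0}}(t)-\sum_{i=1}^{N}{{w}_{g,i}}\,{{\tilde{x}}_{i}}(t)\ \ \left(\textstyle\sum_{i}{{w}_{g,i}}=1\right),$$

so that KF differs from the ideal clock h0 by a weighted average of the residual corrections, and Theorem 1 identifies KW as the weight vector minimizing the representation error (29).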

4. UTC calculation by means of the Kalman filter

In order to evaluate the possibility of using a Kalman filter procedure for UTC, we started from simulated prototypes of time scales generated according to the procedure presented in section 3.4, referring to the literature, in particular to [14] and [15].

4.1. Simulation

The first step we took in order to build a good simulation was to generate the clock data. This has been done according to the mathematical model presented in [13, 16]. The solution of the system of SDEs can be expressed in the iterative form

Equation (30)

where $\mathbf{X}\left(t\right)=\left[{{X}_{1}}(t),{{X}_{2}}(t),{{X}_{3}}(t)\right]$ . X1(t) represents the phase offset of the modelled clock with respect to a reference clock, X2(t) is part of the clock frequency deviation, while X3(t) is part of the frequency drift. ${{\mathbf{\Phi}}_{{{t}_{k+1}},{{t}_{k}}}}$ has the form (15) and $\mathbf{W}\left({{t}_{k}}\right)$ is the vector of clock noises whose covariance matrix, $\mathbf{Q}$ , is the $3N\times 3N$ matrix defined in (17). The ${{\sigma}_{i}}$ , the diffusion coefficients of the noise processes, express the amount of white noise on the frequency component, of random walk noise on the frequency and of the perturbation on the frequency drift.

Once the clock signals have been generated, we compute the differences (24), which correspond to the measurements. All the differences are taken with respect to a clock used as the reference. The measurement noise is then inserted by means of the R covariance matrix introduced in section 3.1. The time scale itself is then computed by applying (28).
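A minimal sketch of this simulation loop, reusing matrices built as in the section 3.2 sketch and with illustrative names, could look as follows:

```python
import numpy as np

def simulate_measurements(Phi, Q, N, n_steps, meas_noise, seed=None):
    """Sketch of the simulation of section 4.1: propagate the ensemble state
    with X(t_{k+1}) = Phi X(t_k) + W(t_k), form the phase differences with
    respect to the first (reference) clock and add white measurement noise.

    Phi, Q     : ensemble matrices, e.g. from clock_model_matrices()
    meas_noise : standard deviation of the simulated time-transfer noise
    Illustrative only; the published runs also use the R matrix of
    section 3.1 and clock-dependent noise levels.
    """
    rng = np.random.default_rng(seed)
    X = np.zeros(3 * N)                      # phases, frequencies, drifts
    Z = np.empty((n_steps, N - 1))
    for k in range(n_steps):
        X = Phi @ X + rng.multivariate_normal(np.zeros(3 * N), Q)
        phases = X[:N]
        # differences of all clocks with respect to the reference clock
        Z[k] = (phases[1:] - phases[0]) + meas_noise * rng.standard_normal(N - 1)
    return Z
```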

We ran different simulations using different parameter specifications for the clocks and for the measurement noise. At first we simulated a set of four clocks, two with characteristics resembling those of caesium clocks (white frequency noise (WFN) at the level of about $3\times {{10}^{-14}}$ at 5 d and random walk frequency noise (RWFN) of $1\times {{10}^{-16}}$ at 5 d), and two modelling masers (WFN at the level of about $1\times {{10}^{-15}}$ at 5 d and RWFN of $1\times {{10}^{-15}}$ at 5 d). The resulting scale stability, compared with the clocks' stability, is shown in figure 1. We can observe that the scale achieves a better stability than any clock of the ensemble, both in the short and in the long term.

Figure 1. Stability comparison between a simulated ensemble of four clocks, two masers (circle markers) and two caesiums (square markers), and the resulting Kalman filter scale (solid black line).

Another simulation was run substituting the maser-like clocks with Rubidium fountains (figure 2). Unlike in the previous case, the KF scale does not appear to have a better stability than the Rubidium fountains, although it does show a very good performance. This is because, in the presence of time transfer noise and with atomic clocks of very different stabilities (here caesium clocks and Rb fountains), the Kalman filter solution does not seem very effective at removing the white phase noise. If we run the same simulation without introducing measurement noise, KF actually shows a better performance than any clock, including the Rb fountains. The amount of time transfer noise used in the simulations reported in figure 2 is equal to 15 ns. In conclusion, when dealing with high precision devices, even a small amount of measurement noise degrades the signal in such a way that the corresponding time scale is prevented from achieving a stability comparable to that of the devices themselves [17].

Figure 2. Stability comparison between a simulated ensemble of four clocks, two Rubidium fountains (circle markers) and two caesiums (square markers), and the resulting Kalman filter scale (solid black line).

4.2. Real data: atomic clocks from the UTC ensemble

Unobservability. The first point to be underlined when passing from simulation to real data is that some of the quantities used to define the KF scale are not observable outside the simulation. In particular, one does not have access to the single clock readings, but only to the differences between the clock data and the data of a reference (generally a better performing clock or a time scale). As a consequence, we are not able to compute the KF scale itself, but only the differences between the scale and the clocks of the ensemble. Therefore, instead of (27) we compute

Equation (31)

Measurement noise correlation. A second remark concerns the covariance matrix R of the measurement error. Although formally defined as a full matrix, in the literature, and particularly in simulations, this matrix is generally taken as diagonal. This implies the assumption that the noise components affecting two different measurements are uncorrelated. In the case of UTC this condition does not hold. In fact, the uncertainties of the links between laboratories are not all computed directly: the system relies on a pivot laboratory, the PTB (Physikalisch-Technische Bundesanstalt), Germany. The link uncertainty between each laboratory and the PTB is estimated, and all the other inter-laboratory uncertainties are then computed by triangulation. This procedure introduces a correlation between measurement errors, and therefore the R matrix can no longer be considered diagonal.
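As a toy illustration of how such correlations can be encoded (assuming, for simplicity, that the reference clock sits at the pivot laboratory, so that measurements from clocks of the same remote laboratory share the same link noise; this is our simplification, not the full uncertainty model of Circular T):

```python
import numpy as np

def pivot_correlated_R(lab_of_clock, link_var):
    """Toy sketch of a non-diagonal measurement covariance R.

    lab_of_clock : list with the laboratory of each non-reference clock
    link_var     : dict {lab: variance of that lab's link to the pivot}
    Clocks belonging to the same laboratory share the same link noise, so
    the corresponding off-diagonal entries of R are non-zero. Illustrative
    only; the operational uncertainties are those of BIPM Circular T.
    """
    n = len(lab_of_clock)
    R = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if lab_of_clock[i] == lab_of_clock[j]:
                R[i, j] = link_var[lab_of_clock[i]]
    return R
```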

4.2.1. A Kalman-based UTC prototype.

In order to test the potential of the presented Kalman filter scale in the UTC framework, we built a prototype of the scale. In order to deal with one problem at a time, we put ourselves in a favourable condition by selecting a subset of the UTC participating clocks such that:

  • every clock was present for the whole considered time interval (≃3 years from 2012 to 2014);
  • the clocks were not affected by any kind of phase or frequency jump.

The resulting ensemble consisted of 139 clocks from 19 different laboratories. In this section we will present the computation of three different time scales:

  • $\text{EA}{{\text{L}}_{139}}$ : EAL computed on the restricted sample of clocks, with maximum weight set to 2.5 N−1, suitable in the case of a smaller sample;
  • $\text{K}{{\text{F}}_{\text{KW}}}$ : the Kalman filter time scale with Kalman weights;
  • $\text{K}{{\text{F}}_{\text{EALW}}}$ : the Kalman filter time scale with the weights resulting from the procedure described in section 2.2 applied to the 139 clocks sample.

To assess the quality of $\text{EA}{{\text{L}}_{139}}$ , the reduced EAL has been compared to the ensemble of primary frequency standards (PFS) used to steer the frequency of EAL. In figure 3 the frequencies of EAL calculated with about 450 atomic clocks (red line) and with 139 atomic clocks (blue line) are compared to the frequency of the PFS. In figure 4 the Allan variance of the same data is reported. From this result it is clear that $\text{EA}{{\text{L}}_{139}}$ is less stable than the current time scale, but its quality is sufficiently good to be used. We should observe that the parameters (frequency of the clocks, frequency drift, etc) are optimized to obtain a good long-term time scale in the case of 450 atomic clocks; different choices should be made in the case of about 150 atomic clocks. However, the results are good enough for the scope of this first analysis.

The input data for our computation were therefore of the form $\text{EA}{{\text{L}}_{139}}-{{h}_{i}}$ with i  =  1,..., 139. From these, we estimated the clock parameters (initial states and noise coefficients), attributing the whole noise signature to the hi. All these clock parameters, obtained from the calculation of $\text{EA}{{\text{L}}_{139}}$ , are used to initialize the Kalman filter algorithm. This is the main idea of the paper: a mixed approach taking advantage of both algorithms. The noise coefficients are calculated by comparing the clocks to TT(BIPM) and by computing the Hadamard variance. Figure 5 shows the data used for the statistical analysis of a clock with respect to $\text{EA}{{\text{L}}_{139}}$ (phase offset of USNO(Rb)-$\text{EA}{{\text{L}}_{139}}$ ) and to TT(BIPM) (normalized frequency and Hadamard variance of USNO(Rb)-TT(BIPM)). The phase data $\text{EA}{{\text{L}}_{139}}-{{h}_{i}}$ are used for the phase estimation, while TT(BIPM), the best available frequency reference, is used for the estimation of the frequency drift and of the noise. In the plot of the Hadamard variance, the typical behaviours of WFN (blue line) and RWFN (green line) used for the noise estimation are reported. In the case of the Rubidium fountain data provided by the USNO (United States Naval Observatory) [18], because of their extremely high stability (both long and short term), the three-cornered hat method is used to evaluate the order of magnitude of the noise.

Considering its excellent metrological properties, the comparison between the stability of $\text{EAL}-{{h}_{i}}$ and $\text{KF}-{{h}_{i}}$ when hi is a fountain is of particular interest: since the clock is more stable than the time scales themselves, the plot actually shows a comparison between the stabilities of the atomic time scales. The noise estimation is the critical aspect when a time scale is built by means of the Kalman filter; the solution can be completely wrong simply because of bad noise estimates. This aspect is of particular relevance in the case of Rubidium fountains, due to their excellent performance. An adaptive algorithm that automatically adjusts the parameters as required should be studied. Several tests have already been carried out in this direction by Greenhall and Davis [19].
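As a sketch of this noise estimation step (assuming the simple two-noise model in which white FM contributes $\sigma_1^2/\tau$ and random-walk FM contributes $\sigma_2^2\tau /6$ to the Hadamard variance, and neglecting the drift-noise term), the diffusion coefficients could be fitted as follows:

```python
import numpy as np

def wfm_rwfm_from_hadamard(taus, hvar):
    """Estimate WFN and RWFN diffusion coefficients from Hadamard variance
    points, assuming HVAR(tau) ~ s1^2 / tau + s2^2 * tau / 6 (white FM plus
    random-walk FM); the stochastic drift contribution is neglected here.
    Least-squares fit on the given (tau, HVAR) points. Illustrative only.
    """
    taus = np.asarray(taus, dtype=float)
    hvar = np.asarray(hvar, dtype=float)
    A = np.column_stack([1.0 / taus, taus / 6.0])
    coeffs, *_ = np.linalg.lstsq(A, hvar, rcond=None)
    s1 = np.sqrt(max(coeffs[0], 0.0))
    s2 = np.sqrt(max(coeffs[1], 0.0))
    return s1, s2
```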

Figure 3. Frequency of EAL calculated with about 450 atomic clocks and with 139 atomic clocks, compared to the frequency of the PFS.
Figure 4. Allan variance of the data of figure 3.
Figure 5. Clock data used for the statistical analysis of the clock with respect to $\text{EA}{{\text{L}}_{139}}$ (phase) and TT(BIPM) (normalized frequency and Hadamard variance), used for the parameter estimation.

Before completing the time scale comparison, there is another aspect on which we need to focus our attention: the clock weights. The x-reduction procedure in the case of measurement noise delivers, together with the Kalman estimates, a weight $\text{K}{{\text{W}}_{i}}$ for each clock, based solely on the clock stability. However, this kind of weight does not suit our purposes, as it does not introduce a maximum for each clock weight (thus affecting the reliability and the traceability of the scale). In this case the weights are completely dominated by those of the Rubidium fountains, as their stability is much better than that of the other clocks. Therefore we use for our computation the EAL weights (EALW), based, as previously described, on the principle that a good clock is a predictable clock. Figure 6 shows the stability comparison between $\text{EA}{{\text{L}}_{139}}$ , $\text{K}{{\text{F}}_{\text{KW}}}$ and $\text{K}{{\text{F}}_{\text{EALW}}}$ , all with respect to a Rubidium fountain. We can observe that the KF scales show an improved stability with respect to EAL, particularly in the case of $\text{K}{{\text{F}}_{\text{KW}}}$ . This is not surprising, as the absence of a maximum weight constraint implies that the scale can rely only on the better performing clocks. This fact, even if it allows a better stability, makes the scale much less robust against possible problems affecting these high-performance clocks.

Figure 6. Stability comparison between $\text{EA}{{\text{L}}_{139}}$ (upper line), $\text{K}{{\text{F}}_{\text{EALW}}}$ (middle line) and $\text{K}{{\text{F}}_{\text{KW}}}$ (bottom line), with respect to the USNO 1930005 fountain.

5. Conclusion

In this paper we presented the use of the Kalman filter applied to a small group of clocks used in the calculation of UTC. We used 3 years of data from a reduced number of clocks that were not affected by time or frequency steps. In this test we calculated the weights with the UTC algorithm, in order to compare the performance of the prediction component alone: in this way we can attribute the difference in stability between the UTC and KF time scales to the prediction component only. The results are very promising and encourage a continuation in this direction. Automatic detection of time and frequency steps, an adapted weighting algorithm and the estimation of the noise and deterministic parameters are the components needed to build an international time scale like UTC; the whole procedure must also run automatically, without external intervention. A robust algorithm for the estimation of the parameters is a very important development for this kind of ensemble time scale.

Acknowledgment

The authors would like to thank E F Arias and C Zucca for the helpful discussions. A special thanks to C Greenhall for sharing his experience in this domain.

Footnotes

  • The time steps do not need to be equally spaced; however, this assumption simplifies the treatment without any loss of generality.

  • For a simpler model it is possible to omit this term and assume there is no measurement noise.

  • For more detailed explanation and derivation of the model see [13].
