Table of contents

Volume 43

Number 1, February 2006

SHORT COMMUNICATION

L1

In this short communication we compare En values that take into account the covariance with the corresponding 'exclusive' statistic in an interlaboratory study framework. The conclusion of this analysis is that there is no difference between the En values accounting for covariance and the corresponding 'exclusive' statistics.
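
The equivalence claimed above can be illustrated numerically. The sketch below (toy data; the weighted-mean reference value, the laboratory index and all numbers are assumptions, not taken from the communication) computes, for one laboratory, the En ratio that accounts for its covariance with an all-inclusive reference value and the 'exclusive' statistic formed against a reference value that omits that laboratory; the two coincide.

```python
# Minimal sketch (illustrative, not the authors' code): for a weighted-mean reference
# value, compare (i) the E_n ratio that accounts for the covariance between a
# laboratory value and the reference it contributed to, and (ii) the 'exclusive'
# statistic formed against a reference value that omits that laboratory.
import numpy as np

x = np.array([10.12, 10.02, 10.05, 9.98])   # laboratory results (toy data)
u = np.array([0.05, 0.03, 0.04, 0.06])      # claimed standard uncertainties

w = 1.0 / u**2
x_ref = np.sum(w * x) / np.sum(w)           # weighted-mean reference value
u_ref = 1.0 / np.sqrt(np.sum(w))

i = 0                                        # laboratory under test
cov = u_ref**2                               # cov(x_i, x_ref) for a weighted mean
en_incl = (x[i] - x_ref) / (2 * np.sqrt(u[i]**2 + u_ref**2 - 2 * cov))

mask = np.arange(len(x)) != i                # reference value excluding lab i
x_exc = np.sum(w[mask] * x[mask]) / np.sum(w[mask])
u_exc = 1.0 / np.sqrt(np.sum(w[mask]))
en_excl = (x[i] - x_exc) / (2 * np.sqrt(u[i]**2 + u_exc**2))

print(en_incl, en_excl)                      # the two statistics coincide
```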

LETTER TO THE EDITOR

L3

The Planck constant, the Avogadro constant and the molar mass of carbon-12 satisfy a compatibility condition: the product of the first two quantities is proportional to the third. Independently fixing any two of these quantities at exact values creates invariant redefinitions of the SI base units for both mass and amount of substance. The third quantity is then determined by the compatibility condition. Of the three possible alternatives, the strategy of fixing the Planck constant to give an invariant redefinition of the base unit for mass together with fixing the Avogadro constant to give an invariant redefinition of the base unit for amount of substance, while necessarily relaxing the exactness constraint on the molar mass of carbon-12, has a number of positive attributes that would seem to make it worth considering along with the other two strategies that each fix the molar mass of carbon-12.
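
As a hedged numerical aside (rounded CODATA-era constants, not data from the letter), the compatibility condition is the molar Planck constant relation, which the following sketch checks.

```python
# Hedged numerical check (rounded constants, approximate values): the relation
#   N_A * h = c * alpha^2 * A_r(e) * M(12C) / (24 * R_inf)
# shows that fixing any two of {h, N_A, M(12C)} determines the third.
c      = 299_792_458            # speed of light, m/s (exact)
alpha  = 7.297353e-3            # fine-structure constant (approx.)
Ar_e   = 5.485799e-4            # relative atomic mass of the electron (approx.)
R_inf  = 1.0973732e7            # Rydberg constant, 1/m (approx.)
M_C12  = 12e-3                  # molar mass of carbon-12, kg/mol (exact before any redefinition)
N_A    = 6.022142e23            # Avogadro constant, 1/mol (approx.)
h      = 6.626069e-34           # Planck constant, J s (approx.)

lhs = N_A * h
rhs = c * alpha**2 * Ar_e * M_C12 / (24 * R_inf)   # = c*alpha^2*Ar(e)*M_u/(2*R_inf), M_u = M_C12/12
print(f"N_A*h = {lhs:.6e}, compatibility value = {rhs:.6e}, rel. diff = {abs(lhs-rhs)/lhs:.1e}")
```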

PAPERS

1

In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch–Satterthwaite (W–S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W–S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W–S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens–Fisher distribution. We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W–S formula with respect to the Behrens–Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
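
A minimal sketch of the two recipes compared above, for the difference of two independent sample means (toy inputs; the specific form of the Bayesian standard uncertainty, s/sqrt(n) multiplied by sqrt((n - 1)/(n - 3)), is assumed here rather than quoted from the paper).

```python
# Illustrative sketch (assumptions noted, not the paper's code): for the difference of
# two independent sample means, compare a 95 % expanded uncertainty built from
# (i) the Welch-Satterthwaite effective degrees of freedom with a t coverage factor and
# (ii) an approximate normal distribution whose standard uncertainty is the assumed
#      Bayesian (posterior) standard deviation s/sqrt(n) * sqrt((n-1)/(n-3)) per mean.
import math
from scipy import stats

def expanded_uncertainties(s1, n1, s2, n2, p=0.95):
    u1sq, u2sq = s1**2 / n1, s2**2 / n2
    u = math.sqrt(u1sq + u2sq)

    # Welch-Satterthwaite effective degrees of freedom + t coverage factor
    nu_eff = u**4 / (u1sq**2 / (n1 - 1) + u2sq**2 / (n2 - 1))
    U_ws = stats.t.ppf(0.5 + p / 2, nu_eff) * u

    # Approximate normal based on Bayesian standard uncertainties (assumed form)
    ub = math.sqrt(u1sq * (n1 - 1) / (n1 - 3) + u2sq * (n2 - 1) / (n2 - 3))
    U_bayes = stats.norm.ppf(0.5 + p / 2) * ub
    return U_ws, U_bayes

print(expanded_uncertainties(s1=1.2, n1=5, s2=0.8, n2=7))
```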

12

If measurement uncertainty is to be expressed by forming a distribution of belief for the measurand then this should be reflected in the theory underlying the analysis of interlaboratory comparison data. This paper presents a corresponding method for the calculation of a reference value in a comparison where each laboratory independently assigns a probability density function, fi(x), to the value X of a stable artefact. Straightforward argument shows that a consensus density function for X can be taken as f(x) = C f1(x) f2(x) ⋯ fn(x), where C is a normalizing constant, assuming that the densities of the n laboratories are reliable and mutually consistent. A method is also presented for examining this consistency. The result f(x) is a special case of a consensus in the statistical literature called the logarithmic opinion pool. The key comparison reference value might be taken as the mean, median or mode of f(x).

The method developed does not explicitly involve laboratory biases (offsets), which are important features of most methods for comparison analysis published recently and which are relevant to the calculation of degrees of equivalence. The belief approach does not seem well-suited when it is assumed that such offsets exist.
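
A minimal numerical sketch of the consensus density described above (toy laboratory densities on a grid; the normalised-product form follows the abstract, everything else is illustrative).

```python
# Minimal grid-based sketch (toy data, not the paper's implementation): form the
# consensus density as the normalised product of the laboratories' densities and
# read off candidate reference values (mean, median, mode).
import numpy as np
from scipy import stats

x = np.linspace(9.0, 11.0, 20001)                       # grid for the artefact value X
labs = [stats.norm(10.02, 0.05), stats.norm(10.08, 0.04), stats.norm(10.05, 0.07)]

f = np.prod([lab.pdf(x) for lab in labs], axis=0)       # product of the n densities
f /= np.trapz(f, x)                                     # normalise to unit area

mean   = np.trapz(x * f, x)
cdf    = np.cumsum(f) * (x[1] - x[0])
median = x[np.searchsorted(cdf, 0.5)]
mode   = x[np.argmax(f)]
print(mean, median, mode)
```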

21

A statistical analysis for key comparisons with linear trends and multiple artefacts is proposed. It extends a previous analysis for a single artefact and has the advantage of being consistent with the no-trend case. The uncertainties for the key comparison reference value and the degrees of equivalence are also provided. As an example, the approach is applied to key comparison CCEM-K2.

27

Annex H.3 of the Guide to the Expression of Uncertainty in Measurement presents an example of calibration of a thermometer using a linear regression model. Annex H.5 of the same publication presents another class of linear statistical models and analysis techniques that are commonly called the analysis of variance (ANOVA). The procedures given in Annexes H.3 and H.5 do not specifically include Type B uncertainty evaluations. A natural question then is as follows: can these statistical models be used when the measurements are subject to additional uncertainty, arising from systematic effects, that needs to be evaluated by Type B methods? This paper answers the question in the affirmative and provides a natural interpretation of the results. The data from the two Annexes are used for illustration.
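
As a hedged illustration of the question answered here (not the Annex calculations themselves), the sketch below combines a Type A uncertainty obtained from a balanced one-way ANOVA with an additional Type B standard uncertainty for a systematic effect by summing variances; all numbers are invented.

```python
# Hedged sketch (toy data): combine a Type A uncertainty from a balanced one-way
# ANOVA with an assumed Type B standard uncertainty for a systematic effect.
import numpy as np

data = np.array([[10.11, 10.14, 10.12],        # J days (rows) x K repeats (columns)
                 [10.18, 10.17, 10.19],
                 [10.13, 10.15, 10.12],
                 [10.16, 10.18, 10.17]])
J, K = data.shape

grand_mean = data.mean()
ms_within  = data.var(axis=1, ddof=1).mean()              # within-day mean square
ms_between = K * data.mean(axis=1).var(ddof=1)            # between-day mean square
F = ms_between / ms_within                                # is the day effect significant?

u_typeA = np.sqrt(ms_between / (J * K))                   # Type A uncertainty of the grand mean
u_typeB = 0.02                                            # assumed Type B standard uncertainty

u_combined = np.sqrt(u_typeA**2 + u_typeB**2)
print(grand_mean, F, u_typeA, u_combined)
```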

34

In this paper we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate parameters of the input distributions of non-linear measurement equations. We will illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach will be compared with the standard GUM (US Guide to the Expression of Uncertainty in Measurement, American National Standards Institute, ANSI/NCSL Z540-2-1997) approach for finite samples using simple linear and non-linear measurement equations. We will investigate performance in terms of coverage probabilities of derived coverage intervals for the mean of the measurand. Proposed Supplement 1 to the GUM (Guide to the Expression of Uncertainty in Measurement, Supplement 1, currently under review by the member organizations of the JCGM and the national measurement institutes (draft 3rd edn)) outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The Supplement's approach assumes that the distributions of the random inputs are known exactly. In practice, however, the distributions of the inputs will rarely be known exactly but must be estimated using possibly small samples. The two-stage Monte Carlo approach will be compared with the GUM Supplement's propagation of distributions approach for non-linear measurement equations. We will show that the GUM Supplement's approach underestimates uncertainty in the measurand if distributions estimated with small sample sizes are assumed to be known exactly.
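
A hedged sketch of the two-stage idea (the measurement equation, noninformative priors and sample sizes below are illustrative assumptions, not the thermistor-mount case study).

```python
# Illustrative two-stage Monte Carlo sketch (assumptions noted; not the authors' code).
# Stage 1 draws plausible input-distribution parameters consistent with small samples
# (mean and variance of a normal input estimated from n observations); stage 2
# propagates draws from those distributions through a non-linear measurement equation.
import numpy as np

rng = np.random.default_rng(1)

def measurement(x1, x2):
    return x1 * np.exp(x2)                    # example non-linear measurement equation

# finite samples used to characterise the two inputs (toy data)
sample1 = rng.normal(1.0, 0.05, size=5)
sample2 = rng.normal(0.2, 0.02, size=8)

def stage1_draw(sample):
    """Draw (mu, sigma) consistent with the sample (normal model, noninformative prior)."""
    n, mean, var_hat = len(sample), sample.mean(), sample.var(ddof=1)
    sigma2 = (n - 1) * var_hat / rng.chisquare(n - 1)     # posterior draw of the variance
    mu = rng.normal(mean, np.sqrt(sigma2 / n))            # posterior draw of the mean
    return mu, np.sqrt(sigma2)

outer, inner = 2000, 200
ys = np.empty(outer * inner)
for i in range(outer):
    (m1, sd1), (m2, sd2) = stage1_draw(sample1), stage1_draw(sample2)
    x1 = rng.normal(m1, sd1, size=inner)
    x2 = rng.normal(m2, sd2, size=inner)
    ys[i * inner:(i + 1) * inner] = measurement(x1, x2)

print(np.percentile(ys, [2.5, 97.5]))         # 95 % coverage interval for the measurand
```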

42

The paper presents an analytical method for calculating a coverage interval. The method consists in approximating the convolution of the standard distributions attributed to the input quantity values, such as Student's t, normal, rectangular, triangular or trapezoidal distributions. It may be applied when the model of the measurand is linear or close to linear.
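
The following sketch illustrates the underlying convolution idea numerically on a grid (the paper's method is analytical; the input distributions and the linear model y = x1 + x2 + x3 here are assumptions).

```python
# Grid-based sketch (not the paper's analytical approximation): convolve the
# distributions attributed to the inputs of a linear model y = x1 + x2 + x3 and
# read a 95 % coverage interval from the resulting distribution.
import numpy as np
from scipy import stats

dx = 0.01
x = np.arange(-5, 5 + dx, dx)

pdf_normal = stats.norm(0, 0.5).pdf(x)
pdf_rect   = stats.uniform(-1, 2).pdf(x)                    # rectangular on [-1, 1]
pdf_tri    = stats.triang(0.5, loc=-0.5, scale=1).pdf(x)    # triangular on [-0.5, 0.5]

conv = np.convolve(np.convolve(pdf_normal, pdf_rect) * dx, pdf_tri) * dx
z = np.arange(len(conv)) * dx + 3 * x[0]                    # support of the threefold convolution
conv /= np.trapz(conv, z)

cdf = np.cumsum(conv) * dx
lo, hi = z[np.searchsorted(cdf, 0.025)], z[np.searchsorted(cdf, 0.975)]
print(lo, hi)                                               # 95 % interval, symmetric about 0 here
```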

46

For a new determination of the Avogadro constant to play a role in the redefinition of the mass unit, the BIPM and other laboratories must carry out mass comparisons at a very high level of accuracy. We must achieve mass comparisons between a 1 kg 28Si sphere and 1 kg Pt/Ir mass standards with a combined standard uncertainty of 4 µg. The BIPM has developed a method to reach the target requested by the Avogadro group. The goal of this paper is to report the advances made at the BIPM and to describe our new method for linking weighings in air and in vacuum. The results obtained by classical comparisons in air are compared with those obtained by the new BIPM method.

53

The National Institute of Standards and Technology (NIST) has characterized two large-diameter (35.8 mm) piston/cylinder assemblies as primary pressure standards in the range 0.05 MPa to 1.0 MPa with uncertainties approaching those of the best mercury manometers. The realizations of the artefacts as primary standards are based on the dimensional characterization of the piston and cylinder, and on models of the normal and shear forces on the base and flanks of the piston. We have studied two piston/cylinder assemblies, known at NIST as PG 38 and PG 39, using these methods. The pistons and cylinders of both assemblies were accurately dimensioned by the Physikalisch-Technische Bundesanstalt (PTB). All artefacts appeared to be round within ±30 nm and straight within ±100 nm over a substantial fraction of their heights. PG 39 was dimensioned a second time by PTB, three years after the initial measurement, and showed no significant change in dimensions or effective area. Comparisons of the effective areas of PG 38 and PG 39 obtained from dimensional measurements with those obtained by calibration against the NIST ultrasonic interferometer manometer (UIM) show agreement within the combined standard (k = 1) uncertainty of the dimensional measurements and the UIM. A cross-float comparison of PG 38 against PG 39 also agreed with the dimensional characterization within their combined standard uncertainties and with the UIM calibrations. The expanded (k = 2) relative uncertainty of the effective area is about 6.0 × 10−6 for both assemblies.

60

A series of experiments was carried out to observe the melting and freezing behaviour of Al–Si eutectic samples under static (adiabatic) and dynamic (steady heating or cooling) conditions. The samples were then analysed by glow discharge mass spectrometry and subsequently doped with the detected major impurities, Ag, Cu and Fe, to find out by how much these impurities shift the melting curves. The following results were obtained: the equilibrium melting temperature at the run-off point was found to be (758.792 ± 0.003) °C, and the above impurities shifted this value by 0.12 mK ppm−1, 0.20 mK ppm−1 and 0.22 mK ppm−1 by weight, respectively.

67

Eutectic cells of Co/C and Ni/C for use in thermocouple calibration were manufactured and tested to investigate their melting and freezing characteristics using type B thermocouples. It was observed that the melting and freezing behaviour of the Co/C and Ni/C systems is very similar. The freezing plateaus were found to be flatter than the melting plateaus, but the melting points were closer than the freezing points to the ideal transition temperature, i.e. the value extrapolated to zero difference of the set temperature used to induce melting/freezing. Based on the observed results, the melting process is recommended for use when calibrating thermocouples.

71

The sum of individual estimates (SIE) and the overall maximum estimate (OME) are two methods recommended to estimate the influence of impurities on the temperatures of the liquid–solid phase transformations of high-purity substances. The methods are discussed starting with the basic crystallographic facts, and their application is demonstrated in detail by applying them to the freezing point of tin as a first example. The SIE method leads to a temperature correction with a corresponding uncertainty while the OME method yields only an uncertainty that is, perhaps not unexpectedly, larger than that of the SIE approach. The necessary sensitivity coefficients (derivatives of the liquidus lines) are tabulated, together with the equilibrium distribution coefficients. Other than the necessity of obtaining a complete elemental analysis of the fixed-point material using glow discharge mass spectrometry (or other suitable techniques), there remain no technical barriers to adopting the preferred SIE method. While the use of the method, and particularly the application of a temperature correction to account for the impurity influence, requires a paradigm shift within the thermometry community, improved interoperability and harmonization of approach are highly desirable goals. The SIE approach maximizes the application of scientific knowledge and represents the best chance of achieving these common goals.
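
A hedged sketch contrasting the two estimates (impurity amounts, liquidus slopes and the cryoscopic constant below are illustrative assumptions, not values from the paper).

```python
# Hedged sketch of the two estimates (illustrative numbers only): the SIE correction
# sums each impurity's amount times its liquidus-line slope, while the OME treats the
# total impurity content via the first cryoscopic constant and assigns only an
# uncertainty (typically the larger of the two).
import math

# mole-fraction impurity contents from a complete elemental assay (toy values)
impurities = {"Fe": 2e-8, "Cu": 1e-8, "Pb": 3e-8}
# liquidus-line derivatives dT/dx in K per mole fraction (illustrative magnitudes)
slopes = {"Fe": -300.0, "Cu": -500.0, "Pb": -200.0}

# SIE: apply a temperature correction with an uncertainty from the assay uncertainties
dT_sie = sum(x * slopes[el] for el, x in impurities.items())
u_x_rel = 0.3                                   # assumed 30 % relative uncertainty per impurity amount
u_sie = math.sqrt(sum((u_x_rel * x * slopes[el])**2 for el, x in impurities.items()))

# OME: no correction; uncertainty from the total content and the first cryoscopic constant
A = 3.3e-3                                      # assumed first cryoscopic constant, 1/K
u_ome = sum(impurities.values()) / A / math.sqrt(3)

print(f"SIE correction {dT_sie*1e3:.4f} mK, u = {u_sie*1e3:.4f} mK; OME u = {u_ome*1e3:.4f} mK")
```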

84

A low pressure sensor based on the field emission effect of carbon nanotubes (CNTs) has been designed and fabricated. The fabricated pressure sensor is of a triode type, consisting of a cathode (CNT field emitter arrays), a grid and a collector. The operating principle is that, for a constant number of electrons available for ionization, emitted from the CNT arrays under a grid potential, a constant fraction of the gas molecules will be ionized, and the number of ions gathered at the collector will be proportional to the number of gas molecules in the chamber traversed by the electrons. Owing to the excellent field emission characteristics of CNTs, it is possible to make a cost-effective cold-cathode ion gauge. A screen-printing method has been used to make the CNT cathode. A glass grid with Cr deposited by an e-beam has been placed on the cathode with a gap of 200 µm between the two electrodes. Under the voltage applied to the grid, the electrons emitted from the CNTs ionize gas molecules in the chamber and the ions are collected at the collector. The collector voltage is kept lower than the grid voltage to obtain a large ionization ratio. The current detected at the collector is proportional to the pressure in the chamber. The ionization characteristics depend on the gas and on the voltages applied to the grid and the collector. In this paper we show the various metrological characteristics of this home-made pressure sensor utilizing CNTs.
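
A minimal sketch of the stated proportionality between collector current and pressure (gauge sensitivity, emission current and ion current below are assumed values, not the device's calibration data).

```python
# Minimal sketch of the ionisation-gauge relation (illustrative numbers only): the ion
# current collected is proportional to the electron emission current and to the gas
# pressure, so P = I_ion / (S * I_e) once the sensitivity S is calibrated for a gas.
I_e   = 10e-6        # electron emission current from the CNT cathode, A (assumed)
S     = 0.05         # gauge sensitivity for nitrogen, 1/Pa (assumed)
I_ion = 2.5e-9       # measured collector (ion) current, A (assumed)

P = I_ion / (S * I_e)
print(f"indicated pressure: {P:.2e} Pa")
```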

89

Pair-difference chi-squared statistics are useful for analysing metrological consistency within a Key Comparison. We show how they relate to classical chi-squared statistics and how they can be used with full rigour, for any comparison of a scalar measurand, to compare the observed dispersion of results with the dispersion that would be expected on the basis of the claimed uncertainties. In several limits, the distributions of these pair-difference statistics are exact chi-squared distributions. For other cases, the Monte Carlo method can evaluate the distributions even in the presence of non-Gaussian uncertainty distributions, including the Student distribution implied when a participant has reported a finite number of degrees of freedom. Monte Carlo methods also treat inter-laboratory covariances in a transparent manner appropriate for metrology. Pair-difference chi-squared statistics are independent of the choice of a Key Comparison Reference Value (KCRV) and so may expedite the process of analysis, consensus building and publication for Key Comparisons. They are appropriate for judging pairwise metrology. We discuss them as a necessary, but not sufficient, test for a Key Comparison reported in the conventional way using a KCRV. The deficiencies of relying solely on the classical chi-squared test are discussed; pair-difference chi-squared statistics provide a good remedy.
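
A sketch of the pair-difference idea (toy results and uncertainties; the exact normalisation of the statistic in the paper may differ, but the Monte Carlo p-value is insensitive to it).

```python
# Sketch (toy data): sum the squared pair differences scaled by the pair variances,
# then use Monte Carlo under the claimed uncertainties to judge how unusual the
# observed dispersion is, without ever choosing a reference value.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([10.12, 10.02, 10.05, 9.98, 10.21])   # reported results
u = np.array([0.05, 0.03, 0.04, 0.06, 0.05])       # claimed standard uncertainties

def pair_chi2(values, uncs):
    i, j = np.triu_indices(len(values), k=1)
    return np.sum((values[i] - values[j]) ** 2 / (uncs[i] ** 2 + uncs[j] ** 2))

observed = pair_chi2(x, u)

# null distribution: all labs unbiased with Gaussian errors of the claimed size
sims = np.array([pair_chi2(rng.normal(0.0, u), u) for _ in range(20000)])
p_value = np.mean(sims >= observed)
print(observed, p_value)
```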

98

Within the framework of the realization of a moist gas generator, a two-dimensional mass and heat transfer model has been established and solved. The influence of parameters such as the gas flow rate and the dew-point temperature of the incoming gas was assessed. The realized moist gas generator was tested by comparison against a reference hygrometer.

106

The absolute frequency of the f-component of the UME laser was measured with the BIPM femtosecond laser comb to be (473 612 353 602.0 ± 1.1) kHz from 3 March to 8 March 2003 and with the UME comb to be (473 612 353 600.6 ± 1.1) kHz from 20 June to 25 June 2005. After appropriate corrections for power and modulation shifts, sub-kilohertz agreement between the two measurements was found. Moreover, the relative frequency stability of different UME–BIPM laser pairs is investigated using standard heterodyne techniques and Allan variance statistics.
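
For reference, a minimal sketch of the Allan-variance characterisation mentioned above (synthetic white-frequency-noise beat data and a simple non-overlapping estimator; none of this is the UME–BIPM data).

```python
# Minimal sketch (synthetic data, non-overlapping estimator): the Allan deviation used
# to characterise the relative frequency stability of a beat between two lasers.
import numpy as np

def allan_deviation(y, taus, tau0):
    """Non-overlapping Allan deviation of fractional-frequency data y sampled every tau0."""
    out = []
    for tau in taus:
        m = int(round(tau / tau0))                 # averaging factor
        n = len(y) // m
        ybar = y[: n * m].reshape(n, m).mean(axis=1)
        out.append(np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2)))
    return np.array(out)

rng = np.random.default_rng(0)
y = rng.normal(0, 1e-11, size=100_000)             # white-frequency-noise beat data, tau0 = 1 s
taus = [1, 10, 100, 1000]
print(dict(zip(taus, allan_deviation(y, taus, tau0=1.0))))
```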

109

Istituto Elettrotecnico Nazionale Galileo Ferraris (IEN), National Institute of Standards and Technology (NIST), National Physical Laboratory (NPL), Laboratoire National de Métrologie et d'Essais—Observatoire de Paris/Systèmes de Référence Temps Espace (OP) and Physikalisch-Technische Bundesanstalt (PTB) operate cold-atom based primary frequency standards which are capable of realizing the SI second with a relative uncertainty of 1 × 10−15 or even below. These institutes performed an intense comparison campaign of selected frequency references maintained in their laboratories during about 25 days in October/November 2004. Active hydrogen maser reference standards served as frequency references for the institutes' fountain frequency standards. Three techniques of frequency (and time) comparisons were employed. Two-way satellite time and frequency transfer (TWSTFT) was performed in an intensified measurement schedule of 12 equally spaced measurements per day. The data of dual-frequency geodetic Global Positioning System (GPS) receivers were processed to yield an ionosphere-free linear combination of the code observations from both GPS frequencies, typically referred to as GPS TAI P3 analysis. Last but not least, the same GPS raw data were separately processed, allowing GPS carrier-phase (GPS CP) based frequency comparisons to be made. These showed the lowest relative frequency instability at short averaging times of all the methods. The instability was at the level of 1 part in 10^15 at one-day averaging time using TWSTFT and GPS CP. The GPS TAI P3 analysis is capable of giving a similar quality of data after averaging over two days or longer. All techniques provided the same mean frequency difference between the standards involved within the 1σ measurement uncertainty of a few parts in 10^16. The frequency differences between the three fountains of IEN (IEN-CsF1), NPL (NPL-CsF1) and OP (OP-FO2) were evaluated. Differences lower than the 1σ measurement uncertainty were observed between NPL and OP, whereas the IEN fountain deviated by about 2σ from the other two fountains.

121

The effects of temperature variation on the timebase errors and impulse responses of two 50 GHz bandwidth sampling oscilloscopes and on the pulse parameters of two pulse generators commonly used for oscilloscope calibrations are reported. The observed variations are significant for high accuracy measurements and contribute to the uncertainty of any measurements performed.

129

Standard definitions of the relative humidity ψ of a gas at temperature t and pressure P involve the ratio of some humidity quantity to the same quantity when the gas is saturated at the same temperature and pressure. These definitions do not apply under certain conditions, notably when the gas is pure unsaturated water vapour and, more generally, when the theoretical saturation vapour pressure is greater than the actual pressure. The use of impedance-based relative humidity sensors to measure humidity under such conditions is increasingly common, and consequently other, less well defined, definitions may be used, about which there is some debate and confusion in the metrological community. In this paper we identify the source of the problem, explore possible definitions and champion a simple and useful definition of relative humidity which extends the traditional definition to cover the whole range.
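
A hedged sketch of one way such an extension can be written down, computing relative humidity as the ratio of the water-vapour partial pressure to the saturation vapour pressure at the gas temperature (a Magnus-type approximation is assumed for the saturation pressure; the authors' preferred definition may differ in detail).

```python
# Hedged sketch (assumed extended definition and Magnus-type approximation): relative
# humidity as the ratio of the water-vapour partial pressure to the saturation vapour
# pressure at the gas temperature, which remains defined even when e_s(t) exceeds P.
import math

def e_sat(t_celsius):
    """Approximate saturation vapour pressure over water, Pa (Magnus-type formula)."""
    return 611.2 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def relative_humidity(x_v, P, t_celsius):
    """x_v: water-vapour mole fraction, P: total pressure in Pa, t: temperature in C."""
    return x_v * P / e_sat(t_celsius)

# Example: warm, low-pressure gas, where saturation at the same t and P is impossible
print(relative_humidity(x_v=0.8, P=5000.0, t_celsius=60.0))
```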

135

We have improved the accuracy of source-based calibration of radiative heat flux sensors by considering the temperature non-uniformity of the blackbody cavity. The method measures the responsivity of a heat flux sensor as a function of distance from the blackbody aperture. From this variation of the responsivity, the temperature distribution and the effective emissivity of the blackbody can be determined via a Monte Carlo simulation. The calibration uncertainty is evaluated to be 2.3% (k = 2) for irradiance values up to 10 kW m−2 at a blackbody temperature around 2900 °C. In order to verify the accuracy improvement, the results are compared with those of a detector-based calibration, which demonstrated agreement within the uncertainty.

142

We used a quasi-spherical cavity as an acoustic and microwave resonator to measure the thermodynamic temperatures, T, of the triple points of equilibrium hydrogen, neon, argon and mercury and to measure the difference T − T90, in the range 7 K to 273 K. (T90 is the temperature on the International Temperature Scale of 1990 (ITS-90).) In the range 7 K to 24.5 K, our preliminary values of T − T90 agree with recent results from dielectric-constant gas thermometry and achieve uncertainties that are comparable to or smaller than those achievable using the interpolating constant volume gas thermometer as currently defined on the ITS-90. In the range 90 K to 273 K, the present results for T − T90 obtained using a helium-filled, copper-walled, quasi-spherical cavity agree with earlier results obtained using argon-filled, steel-walled or aluminium-walled, spherical cavities. The agreement confirms our understanding of both acoustic and microwave cavity resonators and demonstrates that resonators function as primary thermometers spanning wide temperature ranges. The mutually consistent acoustic thermometry data from several laboratories imply that the values of (T − T90)/T90 are 5 times larger than the uncertainty of T/T90 near 150 K and near 400 K. They also imply that the derivative dT/dT90 is too large by approximately 10−4 near 273.16 K and that dT/dT90 has a discontinuity of 4 × 10−5 at 273.16 K.

163

A new coaxial measurement system has been developed to investigate the ac longitudinal resistance along the high-potential side of the quantum Hall resistance (QHR). A novel equivalent circuit of the QHR is used to analyse the ac measurements of the longitudinal resistances along both the low- and high-potential sides of the QHR sample. In addition, a bridge for the measurement of ac contact resistances of the sample is presented. For the first time, it is now possible to perform all the ac measurements whose dc equivalents are well established for reliable dc quantum Hall measurements. While the ac longitudinal resistances on the high- and low-potential sides of the sample are very similar, interesting differences have been observed at high resolution.

INTERNATIONAL REPORT

ERRATUM

183

Table 2 was incorrect due to a sign error in the viscosity ratios CN2(gas) and a misinterpretation of literature values for helium, nitrogen and argon. The corrected table is given in the PDF file. For helium, the corrected measured value differs from the ab initio calculated value of (19.823 ± 0.006) µPa s by twice the combined uncertainty of 0.05%.