Table of contents

Volume 46

Number 3, June 2009

SHORT COMMUNICATION

L11

The third edition of the International Vocabulary of Metrology (VIM) has introduced the concept of kind of quantity as the 'aspect common to mutually comparable quantities'. The concept is a fundamental one, being used in several definitions throughout the VIM, yet its definition is critical for several reasons. Not only is the division of the concept of 'quantity' according to 'kind of quantity' to some extent arbitrary, as noted in the Vocabulary, but the distinction between the concepts of 'quantity' and 'kind of quantity' is itself to some extent arbitrary. This article discusses this subject and suggests a possible solution to some of the issues identified.

LETTERS TO THE EDITOR

L16

This letter addresses the calculation of molar mass and related quantities in the updated version of the SI (most often called the 'New SI' but sometimes the 'Quantum SI') that is currently under discussion by the International Committee for Weights and Measures and its Consultative Committee for Units and that could be adopted by the next General Conference on Weights and Measures in 2011.

PAPERS

145

The law of propagation of uncertainty requires the proper treatment of dependence between the estimates of quantities influencing the measurement. Sometimes these estimates are themselves obtained in previous separate measurements, and the correlation between these estimates is not recorded. In such a case, the corresponding correlation coefficient or covariance will wrongly be deemed to be zero. In other situations, the number of input or output quantities might be large and information about dependence might not be recorded because of the apparent need for the construction of a large matrix of covariances. This paper presents (i) a method whereby information about dependence is recorded in a table with, typically, fewer columns than the matrix and (ii) a corresponding re-expression of the law of propagation of uncertainty. Examples of the application of the method are given.
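
As a point of reference for the abstract above, the matrix form of the law of propagation of uncertainty that it builds on can be sketched in a few lines of Python; the sensitivity coefficients, uncertainties and correlation values below are purely illustrative, and the paper's tabular recording scheme itself is not reproduced here.

```python
import numpy as np

# Standard (matrix) form of the law of propagation of uncertainty:
# u_c^2(y) = c^T U c, with c the sensitivity coefficients and U the
# covariance matrix of the input estimates.  Illustrative values only.
c = np.array([1.0, -2.5, 0.8])      # sensitivity coefficients dy/dx_i
u = np.array([0.10, 0.04, 0.05])    # standard uncertainties u(x_i)
r = np.array([[1.0, 0.7, 0.0],      # correlation coefficients r(x_i, x_j)
              [0.7, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

U = np.outer(u, u) * r              # covariance matrix u(x_i, x_j)
u_y = np.sqrt(c @ U @ c)            # combined standard uncertainty
print(f"u_c(y) = {u_y:.4f}")
```

Setting the off-diagonal correlations to zero in this example changes u_c(y) noticeably, which is precisely the error that unrecorded dependence introduces.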

154

Ideally, the output of one calculation of measurement uncertainty should be usable as an input to another calculation. This paper describes (i) how the output distribution of a Monte Carlo (MC) evaluation of uncertainty can be summarized for use in another analysis and (ii) how a general probability distribution can be summarized for efficient use as an MC input distribution. The principal technique discussed involves fitting an asymmetric form of 'lambda distribution' to the summarizing data. This distribution is defined by the inverse of its distribution function, so the generation of random samples from this distribution is straightforward.

The inverse of the distribution function is known as the 'quantile function'. The principle advocated is that of working with distributions with convenient quantile functions instead of distributions with convenient probability density functions. This principle is applicable whether the distributions represent sampling distributions, as in frequentist statistics, or patterns of belief, as in Bayesian statistics.
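
The sampling convenience referred to above comes from inverse-transform sampling: uniform variates are pushed through the quantile function. A minimal Python sketch follows, using the familiar Ramberg–Schmeisser parametrization of the generalized lambda distribution as a stand-in; the asymmetric form actually fitted in the paper may be parametrized differently, and the parameter values here are illustrative only.

```python
import numpy as np

def lambda_quantile(u, lam1, lam2, lam3, lam4):
    """Quantile function Q(u) = lam1 + (u**lam3 - (1 - u)**lam4) / lam2."""
    return lam1 + (u**lam3 - (1.0 - u)**lam4) / lam2

rng = np.random.default_rng(1)
u = rng.uniform(size=100_000)                  # uniform variates on (0, 1)
y = lambda_quantile(u, 0.0, 1.0, 0.15, 0.25)   # illustrative lambda values
print(y.mean(), y.std())                       # summary of the MC input sample
```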

167

Maxwell's equations, applied in the context of Floquet's theorem to a sequence of periodic precision air lines with surface roughness and ohmic resistance in the conductor walls, offer a solution for finding the propagation constants corresponding to the principal transverse magnetic mode. Boundary conditions that define surface roughness and ohmic wall loss through a tangent surface vector expression, incorporating composite axial and radial electromagnetic field components, enable the construction of a system determinant from which the propagation constant is extracted.

178

A method is presented that can be used for the evaluation of the value of the Earth gravity field at any defined position inside or outside an instrument used for physical or metrological experiments. After a brief presentation of the gravimeters used and the evaluation of their respective uncertainties, we describe in detail the procedure developed to determine the gravitational acceleration g at the point of interest. Finally, a realistic evaluation of the uncertainty budget shows that the method can be used in many metrological applications and especially in all experiments aiming at a new definition of the kilogram by virtual comparison of mechanical and electromagnetic power.

In this paper we denote by g the Earth gravity field, or gravitational acceleration, which is the sum of all accelerations felt by a free-falling body at the surface of the Earth.

187

We revisited the treatment of the influence of the support conditions and the resulting bending of a scale on the positions of its graduation lines. In contrast to earlier publications, we did not calculate the position deviation with respect to the case without any bending. Owing to the production processes used today, this is inappropriate and leads to an overestimation of the related uncertainty contribution. We also extended the treatment from two lines to all lines of the scale and included a finite starting point of the graduation. We verified our analytical model by means of FEM calculations. In addition, we showed that a first-order Taylor expansion yields sufficiently accurate results for the position deviations and leads to simple equations for their size. Because line scale measurements are related to the zero line, the constant term of the Taylor expansion cancels and the remaining coefficient is identical to the sensitivity coefficient required in the determination of the standard uncertainty contribution. If it is sufficient to suppress the position dependence and to consider only the case where the sample is supported at symmetric positions, then a new, simple equation is obtained for the resulting uncertainty contribution. Finally, we showed that, owing to the position dependence of the deviations, the scale coefficient of the scale, which is obtained by a linear fit to the deviations of the line positions of the scale from their nominal values, is also influenced; this has apparently not been noticed up to now. If the line standard used to disseminate the unit of length has not been designed carefully, the resulting change in the scale coefficient imposes a practical limit on the related achievable length-dependent uncertainty contribution.

196

This contribution discusses the relative roles of the standard formulation for the density of liquid water recommended for metrology by the International Committee for Weights and Measures (CIPM) in 2001 and the thermodynamic property formulation adopted by the International Association for the Properties of Water and Steam (IAPWS) in 1995. The two formulations give consistent results for densities in the region of validity of the CIPM standard. Guidelines are presented for the appropriate use of the two formulations.

199

A review of the state of the art of electrochemical methods at the highest metrology level in national metrology institutes (NMIs) is given, with emphasis on standardization work (primary methods) in the fields of pH and electrolytic conductivity, as well as use of coulometry. Attention is also given to certain technical issues in the implementation of these methods.

214

Since the 1st International Comparison of Absolute Gravimeters (ICAG) and accompanying Relative Gravity Campaign (RGC) held at the BIPM in 1981, repeated ICAG-RGCs have been organized every four years. A total of 19 absolute gravimeters (AG) and 15 relative gravimeters (RG) participated in the 7th ICAG-RGC, which took place in 2005. Co-located absolute and relative gravity measurements as well as precision levelling measurements were carried out.

The final version of the absolute g values of the 7th ICAG has been officially released recently. This paper is the final report of the 7th RGC and replaces the preliminary results published earlier. It covers the organization of the RGC and the data processing, analyses RG behaviour, computes g, δg and OAG (offset of AG) and discusses their uncertainties. In preparation for the BIPM key comparison ICAG-2009, a standard data-processing procedure has been developed and installed in the BIPM ICAG-RGC software package, GraviSoft. This was used for the final data processing.

227

We report on Korea's first optically pumped primary frequency standard (KRISS-1), developed at the Korea Research Institute of Standards and Science. Although a relatively small microwave cavity (the cavity drift region is about 36 cm in length) has been employed in this experiment, we achieved our desired frequency accuracy by applying a regularized inverse in the Ramsey spectrum analysis and by compensating for the effect of the evanescent field inside the cavity. KRISS-1 typically performs at a short-term stability of with a combined uncertainty of 1.0 × 10⁻¹⁴.

237

Uncertainty evaluations for kinematic viscosity measurements using a series of capillary master viscometers are given, based on model equations considering precise corrections for the U-tube-type viscometers used at the National Metrology Institute of Japan (NMIJ). Results of viscometer constant calibrations for the NMIJ viscosity scale calibrated by stepping up to high viscosities, and the kinematic viscosity measurements of standard liquids with master viscometers at temperatures from −40 °C up to 100 °C, are evaluated in detail. The relative expanded uncertainties at k = 2 for kinematic viscosity measurements of standard liquids are 0.04% to 0.17% in the viscosity range up to 160 000 mm² s⁻¹.

249

To base the kilogram definition on the atomic mass of the ²⁸Si atom, the present relative uncertainty of the ²⁸Si lattice parameter must be lowered to 3 × 10⁻⁹. A new experimental apparatus capable of a centimetre measurement-baseline has been used to remeasure the lattice parameter of crystals MO*4 of INRIM and WASO 4.2a of PTB. The comparison between these determinations is intended to verify the measurement capabilities and to assess the limits of this experiment.

254

After the Gaussian distribution, the probability distribution most commonly used in the evaluation of uncertainty in measurement is the rectangular distribution. If the half-width of a rectangular distribution is specified but the mid-point is uncertain, and the probability distribution of the mid-point may be represented by another (narrower) rectangular distribution, then the resulting distribution is an isosceles trapezoidal distribution. However, in metrological applications, it is more common that the mid-point is specified but the half-width is uncertain. If the probability distribution of the half-width may be represented by another (narrower) rectangular distribution, then the resulting distribution looks like an isosceles trapezoid whose sloping sides are curved. We refer to such a probability distribution as an isocurvilinear trapezoidal distribution. We describe the main characteristics of an isocurvilinear trapezoidal distribution which arises when the half-width is uncertain. When the uncertainty in the specification of the half-width is not excessive, the isocurvilinear trapezoidal distribution can be approximated by an isosceles trapezoidal distribution.
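
The shape described above is easy to visualize with a small Monte Carlo experiment in Python; the mid-point, nominal half-width and its half-width below are invented for illustration and do not come from the paper.

```python
import numpy as np

# Rectangular distribution with fixed mid-point m but uncertain half-width:
# the half-width itself is rectangular on [a - d, a + d].
rng = np.random.default_rng(1)
m, a, d = 0.0, 1.0, 0.3
n = 1_000_000
half_width = rng.uniform(a - d, a + d, n)          # uncertain half-width
x = rng.uniform(m - half_width, m + half_width)    # value given that half-width

# A histogram of x shows a trapezoid-like shape with curved sloping sides
# (the 'isocurvilinear trapezoidal' distribution); for small d/a it is
# close to an isosceles trapezoid, as the abstract notes.
hist, edges = np.histogram(x, bins=200, density=True)
```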

261

A recent supplement to the GUM (GUM S1) is compared with a Bayesian analysis in terms of a particular task of data analysis, one where no prior knowledge of the measurand is presumed. For the Bayesian analysis, an improper prior density on the measurand is employed. It is shown that both approaches yield the same results when the measurand depends linearly on the input quantities, but generally different results otherwise. This difference is shown to be not a conceptual one, but due to the fact that the two methods correspond to Bayesian analysis under different parametrizations, with ignorance of the measurand expressed by a non-informative prior on a different parameter. The use of the improper prior for the measurand itself may result in an improper posterior probability density function (PDF) when the measurand depends non-linearly on the input quantities. On the other hand, the PDF of the measurand derived by the GUM supplement method is always proper but may sometimes have undesirable properties such as non-existence of moments.

It is concluded that for a linear model both analyses can safely be applied. For a non-linear model, the GUM supplement approach may be preferred over a Bayesian analysis using a constant prior on the measurand. But since in this case the GUM S1 PDF may also have undesirable properties, and as often some prior knowledge about the measurand may be established, metrologists are strongly encouraged to express this prior knowledge in terms of a proper PDF which can then be included in a Bayesian analysis. The results of this paper are illustrated by an example of a simple non-linear model.
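
For readers unfamiliar with the GUM S1 procedure referred to here, a minimal Monte Carlo sketch in Python is given below. The non-linear model Y = X1² + X2 and the input PDFs are an arbitrary illustration, not the example treated in the paper.

```python
import numpy as np

# GUM S1-style propagation: assign state-of-knowledge PDFs to the inputs,
# sample them, and evaluate the model for each draw.
rng = np.random.default_rng(1)
n = 10**6
x1 = rng.normal(0.0, 1.0, n)     # X1 ~ N(0, 1)
x2 = rng.normal(10.0, 0.5, n)    # X2 ~ N(10, 0.5**2)
y = x1**2 + x2                   # non-linear measurement model

estimate = y.mean()
std_unc = y.std(ddof=1)
low, high = np.percentile(y, [2.5, 97.5])   # 95% coverage interval
print(estimate, std_unc, (low, high))
```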

267

In this paper Bayesian analysis is applied to assign a probability density to the value of a quantity having a definite sign. This analysis is logically consistent with the results, positive or negative, of repeated measurements. Results are used to estimate the atom density shift in a caesium fountain clock. A comparison with the classical statistical analysis is also reported and the advantages of the Bayesian approach for the realization of the time unit are discussed.
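
A minimal sketch of this kind of sign-constrained Bayesian inference is given below, assuming Gaussian measurement noise with known standard deviation and a flat prior on the non-negative half-line, so that the posterior is a normal distribution truncated at zero. The data values are invented; the paper's model for the density shift may differ in detail.

```python
import numpy as np
from scipy import stats

data = np.array([-0.8, 0.3, -0.2, 0.5, -0.1])   # repeated results, some negative
sigma = 0.6                                     # assumed known std dev per result

mu = data.mean()                  # likelihood centre
se = sigma / np.sqrt(data.size)   # its standard error

# Flat prior on [0, inf) times Gaussian likelihood -> truncated normal posterior.
posterior = stats.truncnorm(a=(0.0 - mu) / se, b=np.inf, loc=mu, scale=se)
print(posterior.mean(), posterior.interval(0.95))
```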

272

Systematic errors due to the evanescent field inside an E-plane-type Ramsey cavity are corrected by introducing an extended square pulse profile in our Ramsey spectral analysis. This procedure is crucial for the proper operation of thermal-beam-based atomic frequency standards with short cavities. By accounting for the spatial extent of the field inside the cavity, the method enables a correct calculation of the transition probability for atoms in the interaction region. The authors demonstrate that the systematic errors in the quadratic Doppler and end-to-end cavity phase shifts can be reduced by an order of magnitude.

277

A new differential nanoforce facility, based on a disc pendulum with electrostatic stiffness reduction and electrostatic force compensation for the measurement of horizontal forces in the range below 1 µN, is presented. First measurements in air over an averaging time of 50 s show a noise level of the facility of 42 pN. The method and the results of measuring the light pressure of a red He–Ne laser with a power of 7 mW (F_L = 47 pN) are presented. The force measurement uncertainty of the device is below 5% for a measured force of 1 nN and a measuring duration of 50 s.
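
The quoted light-pressure force can be checked with a one-line calculation, assuming the 7 mW beam is fully reflected by the pendulum (F = 2P/c); the actual reflectivity and geometry in the experiment may of course differ.

```python
P = 7e-3                 # laser power in W
c = 299_792_458.0        # speed of light in m/s
F = 2 * P / c            # radiation-pressure force for full reflection, in N
print(f"F = {F * 1e12:.1f} pN")   # about 47 pN, consistent with the abstract
```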

283

The design and first results of two free-fall absolute gravimeters are reported: a stationary gravimeter designed to be used as a reference system and a portable gravimeter aimed at field measurements.

The determination of the acceleration due to gravity is done interferometrically in both instruments. The whole fringe signal is digitized by a high-speed analogue-to-digital converter, which is locked to a rubidium frequency standard. This fringe recording and processing is novel compared with commercial free-fall gravimeters, which use electronic zero-crossing discrimination. Advantages such as the application of a zero-phase-shifting digital filter to the digitized data are described. The portable gravimeter's mechanics deviate from the conventional type: springs are used to accelerate and decelerate the carriage supporting the falling object.

A detailed uncertainty budget is given for both gravimeters. The combined standard uncertainty for the portable and for the stationary gravimeter is estimated at 38.8 µGal and 16.6 µGal, respectively. The corresponding statistical uncertainties are 1.6 µGal (over one day of measurement) and 0.6 µGal (over one month of measurement).

The different designs and dimensions of the new free-fall gravimeters can help to reveal unknown or so far underestimated systematic effects. The assessments of the uncertainties due to seismic noise, shock vibrations and electronic phase shifts support this assumption.
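
The zero-phase digital filtering mentioned in this abstract can be illustrated with a short Python sketch using a forward-backward filter; the synthetic fringe signal, sampling rate and filter corner frequency below are invented and do not describe the instruments' actual processing chain.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1.0e6                            # assumed sampling rate in Hz
t = np.arange(0, 0.02, 1 / fs)        # 20 ms record of the digitized fringe signal
rng = np.random.default_rng(1)
fringe = np.sin(2 * np.pi * 5e4 * t) + 0.05 * rng.normal(size=t.size)

# 4th-order low-pass at 100 kHz, applied forward and backward so that the
# filtering introduces no phase shift into the fringe timing.
b, a = butter(4, 1e5 / (fs / 2), btype="low")
filtered = filtfilt(b, a, fringe)
```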

298

The method for realization of the kilogram using 'superconducting magnetic levitation' was re-evaluated at MIKES. The realization of the kilogram based on the traditional levitation method is limited by the imperfections of the superconducting materials and the indefinable dependence between supplied electrical energy and the gravitational potential energy of the superconducting mass. This indefiniteness is proportional to the applied magnetic field and is caused by increasing losses and trapped magnetic fluxes. A new design of an electromechanical system for the levitation method is proposed. In the proposed system the required magnetic field and the corresponding force are reduced, as the mass of the body (hanging from a mass comparator) is compensated by the reference weight on the mass comparator. The direction of the magnetic force can be upward (levitation force, when the body is over the coil) or downward (repulsive force, when the body is under the coil). The initial force to move the body from the coil is not needed and magnetic field sensitivity is increased, providing linearization of displacement versus applied current. This new construction allows a lower magnetic induction, reduces energy losses compared with previous designs of electromechanical system and reduces the corresponding systematic error.

305

The international UTC/TAI time and frequency transfer network is based on two independent space techniques: Two-Way Satellite Time and Frequency Transfer (TWSTFT) and Global Navigation Satellite System (GNSS). The network is highly redundant. In fact, 28% of the national time laboratories, which contribute 88% of the total atomic clock weight and all the primary frequency standards to UTC/TAI, operate both techniques. This redundancy is not fully used in UTC/TAI generation. We propose a combination that keeps the advantages of TWSTFT and GNSS and offers a new and effective strategy to improve UTC/TAI in terms of accuracy, stability and robustness. We focus on the combination of two BIPM routine products, TWSTFT and GPS PPP (time transfer using the precise point positioning technique), but the proposed method can be used for any carrier phase-based GNSS product.

315

The texture of surfaces within a piston–cylinder assembly (PCA) can influence the pressure performance of gas-operated dead weight pressure balances (DWPBs). In order to study this response systematically, it has been necessary to design, develop and manufacture uniquely interchangeable 35 mm diameter PCAs for use in a novel hybrid gas-operated DWPB with high mechanical, thermal and pressure stability. This work reports the development of the PCAs and the validation of the DWPB design, allowing the performance characteristics of the interchangeable PCAs to be understood in terms of variations of effective area calculations. This is achieved by investigating the pressure responses of the DWPBs while changing the speed and direction of rotation. The results demonstrate the stability of the gas-operated DWPB design when used in gauge mode and, importantly, allow the verification of the performance of the interchangeable PCAs.

323

The usefulness of weighted means statistics as a consensus mean estimator in collaborative studies is discussed. A random effects model designed to combine information from several sources is employed to justify their appeal to metrologists. Some methods of estimating the uncertainties and of constructing confidence intervals are reviewed.
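
As a concrete (and deliberately simplified) illustration of the weighted means statistics discussed in this paper, the Python sketch below computes a random-effects consensus mean with the between-laboratory variance estimated by the DerSimonian–Laird moment method; the data are invented, and the estimators actually reviewed may differ in detail.

```python
import numpy as np

x = np.array([10.03, 10.10, 9.97, 10.06])   # laboratory estimates
u = np.array([0.04, 0.06, 0.05, 0.03])      # their standard uncertainties

w0 = 1.0 / u**2                             # fixed-effects weights
xbar0 = np.sum(w0 * x) / np.sum(w0)
Q = np.sum(w0 * (x - xbar0)**2)             # Cochran's Q statistic
k = x.size
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w0) - np.sum(w0**2) / np.sum(w0)))

w = 1.0 / (u**2 + tau2)                     # random-effects weights
xbar = np.sum(w * x) / np.sum(w)            # consensus mean
u_xbar = np.sqrt(1.0 / np.sum(w))           # its standard uncertainty
print(xbar, u_xbar, tau2)
```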

332

There has been considerable discussion about the merits of redefining four of the base units of the SI, including the mole. In this paper, the options for implementing a new definition for the mole based on a fixed value for the Avogadro constant are discussed. They are placed in the context of the macroscopic nature of the quantity amount of substance and the opportunity to introduce a system for molar and atomic masses with unchanged values and consistent relative uncertainties.

339

This paper describes a new approach to link air and vacuum mass measurements using magnetic levitation techniques. This procedure provides direct traceability to national standards, presently defined in ambient air. We describe the basic principles, challenges, initial modelling calculations and performance expectations.

345

A generalized statistical approach for interlaboratory comparisons with linear trends is proposed. This new approach can be applied to the general case when the artefacts are measured and reported multiple times in each participating laboratory. The advantages of this approach are that it is consistent with previous approaches when only the pilot laboratory makes multiple measurements and that it applies whether or not a trend exists. The uncertainties for the comparison reference value and the degree of equivalence are also provided. As an illustration, the method is applied to the SIM.EM-K2 comparison for resistance at the level of 1 GΩ.

351

The uncertain-numbers method (Hall B D 2006 Metrologia 43 L56–61) is an alternative computational procedure to the law of propagation of uncertainty (LPU) described in the Guide to the Expression of Uncertainty in Measurement. One advantage of the method is that data processing can be carried out in an arbitrary series of steps and much of the mathematical analysis normally associated with the LPU can be automated by software. This has applications for measuring systems that are modular in design and use internal data processing to apply corrections to raw data. Several scenarios involving radio frequency power measurements are used to illustrate the new method in this context. The scenarios show something of the difficulty inherent in calculating uncertainty for modern measurement systems and in particular highlight the occurrence of systematic errors arising from internal instrument correction factors. Such errors introduce correlation to a series of measurements and must be handled with care when functions of results, such as means, differences and ratios, are required.
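
The idea of propagating uncertainty automatically through an arbitrary series of processing steps can be sketched with a small Python class; this is only an illustration in the spirit of the uncertain-numbers approach, not the published implementation, and the power readings and influence labels below are hypothetical.

```python
import math

class UncertainNumber:
    """Value plus uncertainty components with respect to independent influences."""

    def __init__(self, value, components):
        self.value = value
        self.components = dict(components)   # {influence label: u component}

    def _combine(self, other, value, da, db):
        other_comps = other.components if isinstance(other, UncertainNumber) else {}
        comps = {label: da * self.components.get(label, 0.0)
                        + db * other_comps.get(label, 0.0)
                 for label in set(self.components) | set(other_comps)}
        return UncertainNumber(value, comps)

    def __sub__(self, other):
        ov = other.value if isinstance(other, UncertainNumber) else other
        return self._combine(other, self.value - ov, 1.0, -1.0)

    def __truediv__(self, other):
        ov = other.value if isinstance(other, UncertainNumber) else other
        return self._combine(other, self.value / ov, 1.0 / ov, -self.value / ov**2)

    @property
    def u(self):
        return math.sqrt(sum(c * c for c in self.components.values()))

# Two power readings sharing a common instrument-correction influence:
# the shared component cancels in their difference and largely in their ratio,
# exactly the kind of correlation handling the abstract refers to.
p1 = UncertainNumber(10.00, {"noise1": 0.02, "correction": 0.05})
p2 = UncertainNumber(10.30, {"noise2": 0.02, "correction": 0.05})
print((p2 - p1).u)   # ~0.028: the 'correction' component cancels
print((p2 / p1).u)   # ratio: 'correction' largely cancels as well
```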

359

The Metrology Light Source (MLS), the electron storage ring of the Physikalisch-Technische Bundesanstalt (PTB), is operated as a primary radiometric source standard from the near infrared (NIR) to the vacuum ultraviolet spectral region with calculable synchrotron radiation according to electromagnetic theory (Schwinger equation). The operational parameters of the MLS can be varied in a wide range to adjust the spectral distribution and the intensity of the resulting spectrum to the specific measurement requirements. The electron beam energy can be set to values between 105 MeV and 630 MeV, and the electron beam current can be varied from 1 pA (one stored electron) up to 200 mA.

Using two calibrated filter radiometers with centre wavelengths of 476 nm and 1595 nm as transfer standards, the calculated spectral radiant power of the MLS into a well-defined aperture was compared with the spectral irradiance responsivity scale of PTB realized by cryogenic radiometers in the visible and NIR spectral range. The measurements were performed with the MLS operated at various electron energies. Good agreement was found within the combined relative uncertainties.

367

This work presents the uncertainties in the effective radiating area (A_ER) and beam non-uniformity ratio (R_BN) for ultrasound (US) transducers in the range of 1.0 MHz to 3.5 MHz, and for head diameters of 1.27 cm and 2.54 cm. Measurements were performed using the US pressure field mapping system developed at the Laboratory of Ultrasound located at the Brazilian National Metrology Institute, which provides national traceability for the assessed quantities. The calculation protocol was developed based on Standard IEC 61689:2007. The type A uncertainty was estimated from four repetitions of the full measurement procedure for determinations of A_ER and R_BN, and the type B uncertainty was estimated from mathematical models for both parameters, based on IEC 61689:2007 and the ISO 'Guide to the Expression of Uncertainty in Measurement'. The maximum combined expanded uncertainties (95% confidence level) were 6.8% for A_ER and 14.9% for R_BN.

375

, , , , and

A method for the quantitation of trace amounts of DNA by counting individual DNA molecules using a high-sensitivity flow cytometric setup has been developed and evaluated for the purpose of establishing a reference analytical procedure. Model DNA molecules, represented by lambda (λ) viral DNA (48 502 bp, double-stranded), were electro-focused to form a tightly bound flow stream at a detection point situated on the centre axis of fused silica tubing measuring 50 µm × 50 µm. The DNA particles, stained with a fluorescent dye, were detected individually with a high-sensitivity laser-induced fluorescence (LIF) detection system. Assuming all DNA particles in a given sample volume are detected and counted ('exhaustive counting'), the molar concentration can be calculated without the need for calibration materials. The validity of the proposed measurement method was thoroughly examined and discussed.
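
The 'exhaustive counting' principle described above reduces, in the ideal case, to a single relation between the number of molecules counted, the analysed volume and the Avogadro constant. A minimal sketch, with a hypothetical count and volume, follows.

```python
N_A = 6.02214076e23      # Avogadro constant in mol^-1
count = 1.2e5            # number of DNA molecules counted (hypothetical)
volume_L = 2.0e-6        # sample volume analysed, in litres (hypothetical)

concentration = count / (N_A * volume_L)   # molar concentration in mol/L
print(f"c = {concentration:.2e} mol/L")    # ~1e-13 mol/L for these values
```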