Table of contents

Volume 12

Number 9, September 2001

SPECIAL FEATURE: EUROMECH COLLOQUIUM ON 3C STEREO AND HOLOGRAPHIC PIV. APPLICATION TO TURBULENCE MEASUREMENTS

001

Particle image velocimetry (PIV) is a well-established and powerful technique for performing whole-field velocity measurements under a broad range of flow conditions, and is widely used in fluid mechanics laboratories. The aim of this feature issue of Measurement Science and Technology is to provide the reader with an overview of some recent progress in PIV applications presented at the EUROMECH 411 colloquium, held at CORIA, Rouen University, France, from 29 to 31 May 2000. The meeting, organized by M Trinité and B Lecordier, focused on three main topics:

  • Advanced PIV techniques (2D2C, 2D3C or 3D) used to study turbulent flows.

  • Accuracy and limitations of PIV treatments for measurement of turbulence properties.

  • Relevant post-processing tools for PIV data to extract information on turbulent flows.

In total, 24 papers were presented on these topics and four keynote lectures were given by recognized specialists. Seven papers have been selected by the EUROMECH Scientific Committee for publication in this feature issue.

If three-component, three-dimensional (four-dimensional with time) measurement is the ultimate goal of PIV development, then holography is the only practical way to extend sheet-oriented PIV to measuring a volume of considerable depth. One major problem, however, is the pollution from particles located at different depths: image quality is spoiled by noise from out-of-focus particles. Digital holography using a 2D wavelet algorithm (Coëtmellec et al) is a simple and promising new method in this area, despite its currently limited resolution. Stereoscopic PIV, in which two cameras view the flow field from different angles, is one technique for measuring all three velocity components in a plane of the fluid. To reconstruct the three-dimensional velocity field, a calibration stage is necessary because the imaging geometry causes significant aberrations. A 3D vector is calculated at a physical point in the object plane; the first step consists of computing the two 2D vectors from both cameras at that point. Two methods for doing so (warping and mapping) are discussed in the paper by Coudert and Schon.

Cross-correlation based PIV algorithms have a limited spatial resolution, which is determined by the size of the interrogation window. The study and development of an integrated PIV-PTV method is described by Stitou and Riethmuller. This combines the robustness of particle image velocimetry with the spatial resolution of particle tracking velocimetry for super-resolution analysis.
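
As a concrete illustration of the window-based processing that sets this resolution limit, the following minimal sketch (in Python with NumPy; the function name and window inputs are ours, not taken from any of the papers) estimates the displacement of one interrogation-window pair from the peak of its FFT-based cross-correlation:

    import numpy as np

    def piv_displacement(win_a, win_b):
        """Estimate the mean particle displacement between two interrogation
        windows from the peak of their circular cross-correlation. The
        window size, not the particle size, sets the spatial resolution,
        which is the limitation discussed above."""
        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
        corr = np.fft.fftshift(corr)
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        return np.array(peak) - np.array(corr.shape) // 2   # (dy, dx) in pixels

A practical correlator would add sub-pixel peak interpolation and window overlap, but the one-vector-per-window structure, and hence the resolution limit, is the same.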

Experimental studies using known flow fields, such as grid turbulence or pipe flows, have been used to try to demonstrate the capability of the PIV technique to extract turbulence information from vector maps. Nevertheless, from such studies it is always difficult to evaluate which treatment measures a given flow most accurately, mainly because the exact characteristics of the flows are not known. A solution is to impose the properties of the flow numerically and to generate synthetic images (Lecordier et al). This approach is now possible by using direct numerical simulation (DNS).

Now that PIV has evolved from its embryonic state to reach a certain level of confidence, we face a problem directly linked to the high spatial resolution and acquisition frequency that can nowadays be achieved by manufactured CCD cameras: an overabundance of information, which has to be appropriately reduced if one wishes to draw conclusions from the data set. Two recent tools can now be used to address the problem of unsteady turbulent flows and to attempt to separate the fluctuation intensity due to unsteadiness from that due to the turbulence itself: proper orthogonal decomposition (POD) and wavelet transform analysis. Promising results are obtained using POD in rotating flows (Graftieaux et al, Patte-Rouland et al). With regard to spatial correlation and pattern recognition, this tool is also promising for evaluating the contribution of coherent structures to the generation and self-sustainment of wall turbulence in boundary layers. Wavelet transform analysis can be applied to analyse the temporal evolution of coherent structures and to characterize the leading vortex generated in a starting flow (Schram and Riethmuller).

We believe that this special feature provides a valuable overview of the proceedings at the colloquium and we would like to express our thanks to all the participants and to the contributing authors and reviewers who have made this special feature possible.

1371

In stereoscopic PIV, mapping and warping algorithms are used to back-project the image information onto the object plane. The mapping algorithm back-projects each pixel, whereas the warping algorithm back-projects the 2D displacement vectors. In this paper, both algorithms are considered, together with the misalignment of the laser sheet with respect to the object plane. A new algorithm using both the mapping and warping approaches is then described for a 2D3C angular stereoscopic PIV set-up with standard lenses and the so-called Scheimpflug condition. The accuracy of the new algorithm, which takes the laser sheet misalignment into account, is compared with that of the previous algorithms on the same sets of images. These sets are recorded from a paper pattern mounted on a 3D translation and rotation stage and from a turbulent pipe flow. The tests show an accuracy benefit of one order of magnitude on these sets. The accuracy gain depends on the magnitude of the laser sheet misalignment, which can be measured using the mapping algorithm.
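
For orientation, the final 3C reconstruction step common to angular stereoscopic set-ups can be sketched as follows (textbook symmetric-geometry formulae, not the specific mapping/warping algorithm of the paper; the angle arguments and names are assumptions):

    import numpy as np

    def reconstruct_3c(u1, v1, u2, v2, a1, a2):
        """Combine the back-projected in-plane displacements (u1, v1) and
        (u2, v2) seen by two cameras viewing the light sheet at off-axis
        angles a1 and a2 (radians, in the x-z plane) into one 3C vector."""
        t1, t2 = np.tan(a1), np.tan(a2)
        w = (u1 - u2) / (t1 + t2)            # out-of-plane component
        u = (u1 * t2 + u2 * t1) / (t1 + t2)  # in-plane x component
        v = 0.5 * (v1 + v2)                  # in-plane y component (symmetric set-up)
        return u, v, w

Any error in where the two 2D vectors are evaluated, which is exactly what laser sheet misalignment causes, propagates directly into u, v and w; this is why the correction described in the paper yields an order-of-magnitude gain.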

1382

An original method, based on the direct numerical simulation (DNS) of two-phase flows, is proposed to evaluate the capability of PIV algorithms to measure turbulence properties. 3D particle fields are produced from the DNS and are used to generate synthetic images with well-defined flow properties. The velocities imposed in the synthetic images are evaluated with different PIV algorithms, and the results can be compared with the initial conditions of the simulation.

The first part of the present paper describes the synthetic image generator and the DNS of two-phase flows. The second part focuses on the analysis of synthetic images of homogeneous turbulence using two different PIV algorithms: a conventional PIV algorithm and a `sub-pixel' iterative approach.
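
The principle of such a generator can be sketched in a few lines (a toy version with Gaussian particle images and a uniform imposed displacement standing in for the DNS field; all names and parameter values are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    def render(xp, yp, size=256, d=2.0):
        """Render tracer particles as Gaussian spots of e^-2 diameter d pixels."""
        y, x = np.mgrid[0:size, 0:size]
        img = np.zeros((size, size))
        for px, py in zip(xp, yp):
            img += np.exp(-8.0 * ((x - px) ** 2 + (y - py) ** 2) / d ** 2)
        return img

    # Seed random tracers, then displace them by a known amount before
    # rendering the second frame; in the paper the displacements come
    # from the DNS velocity field rather than a uniform shift.
    xp, yp = rng.uniform(0, 256, 2000), rng.uniform(0, 256, 2000)
    frame_a = render(xp, yp)
    frame_b = render(xp + 3.2, yp - 1.5)   # imposed displacement in pixels

Because the displacement field is imposed exactly, any PIV algorithm run on frame_a and frame_b can be scored against ground truth, which is the point of the method.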

1392

We present the preliminary results of a digital holographic system that can determine the two-dimensional velocity vector fields in several slices of a sample volume. A CCD camera directly records the diffraction patterns of small particles illuminated by a double-pulse laser diode. In fact, the diffraction can be interpreted as a convolution with a wavelet family of functions. The scale parameter a is related to the distance z between a particle and the CCD camera. Then, the intensity distributions in a plane located at a distance z are reconstructed by computing the wavelet components for the corresponding scale parameter a. Afterwards, a particle image velocimetry algorithm is applied to the numerically reconstructed pair of images. The feasibility of this technique is demonstrated for two simulated displacements.
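
The numerical refocusing step can be sketched as follows; note the hedge that this sketch propagates the recorded pattern with a standard angular-spectrum (Fresnel) transfer function, the operation that the paper recasts as a wavelet transform whose scale parameter a encodes the depth z, and that the function and parameter names are ours:

    import numpy as np

    def refocus(hologram, z, wavelength, pitch):
        """Reconstruct the intensity in a plane at distance z from an
        in-line hologram (square array) sampled with the given pixel pitch."""
        n = hologram.shape[0]
        f = np.fft.fftfreq(n, d=pitch)
        fx, fy = np.meshgrid(f, f)
        h = np.exp(-1j * np.pi * wavelength * z * (fx ** 2 + fy ** 2))
        field = np.fft.ifft2(np.fft.fft2(hologram) * h)
        return np.abs(field) ** 2

Repeating this for both exposures and several values of z gives the numerically reconstructed image pairs to which the PIV algorithm is applied slice by slice.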

1398

This work constitutes the study and development of an integrated PIV-PTV method, which combines the advantages of the two approaches: the robustness of PIV and the spatial resolution of PTV. The technique is a hybrid algorithm that starts with a statistical evaluation of the tracers' displacement (PIV analysis) and then refines the resolution of the measurement by tracking individual particles (PTV). This strategy is referred to as super-resolution analysis. The work presents and critically discusses the principles of this processing, emphasizing the key points of the method. It focuses mainly on the particle extraction and particle tracking steps, aiming at the investigation of monophasic flows. Efforts are made to cope with the imaging conditions of highly seeded flows and to improve the robustness of the technique. Tests on real images are presented and discussed.
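
The super-resolution step itself reduces to a guided nearest-neighbour search; a minimal sketch, in which the predictor interface is a hypothetical stand-in for interpolation of the coarse PIV vector map:

    import numpy as np

    def track(pa, pb, predictor, r=2.0):
        """For each particle centroid in frame A (pa, shape (N, 2)), look for
        a partner in frame B (pb, shape (M, 2)) within radius r pixels of the
        position predicted by the coarse PIV field, and keep unambiguous
        matches as individual-particle vectors."""
        vectors = []
        for p in pa:
            guess = p + predictor(p)
            dist = np.linalg.norm(pb - guess, axis=1)
            j = np.argmin(dist)
            if dist[j] < r:
                vectors.append((p, pb[j] - p))
        return vectors

The PIV predictor is what gives the hybrid its robustness at high seeding density, where an unguided nearest-neighbour search would mismatch particles.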

1404

This paper discusses the application of proper orthogonal decomposition (POD) to particle image velocimetry (PIV) velocity fields of the recirculation zone of an annular jet. The annular jet is an example of a complex shear flow: two axisymmetric shear layers, originating at the exit of the jet, one at the lip of the nozzle and the other at the centre body, eventually meet downstream or interact with each other. In this study we propose the use of POD on PIV velocity fields of the recirculation zone of this annular jet. The primary objective of the paper is to describe the contribution of the POD modes to the understanding of the nature of the flow, and especially the link between the first POD eigenfunctions and the motion of the stagnation point at the extremity of the recirculation zone. In particular, this allows us to dissociate the oscillation of the recirculation zone from the velocity fluctuations due to the turbulent behaviour of the flow.
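
For readers unfamiliar with the tool, the snapshot POD of a set of PIV fields reduces to a singular value decomposition; a minimal sketch (our notation, with the mean flow assumed already subtracted):

    import numpy as np

    def pod(snapshots):
        """Snapshot POD: `snapshots` has shape (n_snapshots, n_points), each
        row a flattened fluctuating velocity field. Returns the fraction of
        fluctuating energy per mode and the spatial eigenfunctions."""
        u, s, vt = np.linalg.svd(snapshots, full_matrices=False)
        energy = s ** 2 / np.sum(s ** 2)
        return energy, vt   # vt[k] is the kth spatial mode

Truncating the reconstruction to the first eigenfunctions isolates the large-scale motion of the stagnation point from the turbulent fluctuations, which is the dissociation described above.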

1413

Digital particle image velocimetry and wavelet analysis are combined in this work in order to study the characteristics of the leading vortex ring generated in an impulsively starting jet flow. The wavelet analysis allows one, by virtue of its properties of selectivity in space and scale, to detect coherent structures and to compute their position and size and, through further processing, their convection velocity and circulation. It has been observed that the energy of the leading vortex ring continuously increases even after it has pinched off from the jet shear layer generating it.
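
The detection step can be illustrated with a 2D Mexican-hat wavelet: convolving the vorticity map at several scales and taking the maximum over position and scale yields estimates of the structure's centre and size (an illustrative sketch, not the authors' exact processing chain; square fields are assumed):

    import numpy as np

    def mexican_hat(size, a):
        """2D Mexican-hat wavelet of scale a on a size x size grid."""
        y, x = np.mgrid[0:size, 0:size] - size // 2
        r2 = (x ** 2 + y ** 2) / a ** 2
        return (2.0 - r2) * np.exp(-r2 / 2.0)

    def locate_vortex(vorticity, scales):
        """Return the centre (row, col) and characteristic scale of the
        dominant structure in a square vorticity map."""
        best = (-np.inf, None, None)
        size = vorticity.shape[0]
        for a in scales:
            kern = np.fft.fft2(np.fft.ifftshift(mexican_hat(size, a)))
            w = np.fft.ifft2(np.fft.fft2(vorticity) * kern).real
            i = np.unravel_index(np.argmax(w), w.shape)
            if w[i] > best[0]:
                best = (w[i], i, a)
        return best[1], best[2]

Tracking the detected centre and scale from frame to frame then gives the convection velocity and, after integration, the circulation.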

1422

Particle image velocimetry (PIV) measurements are made in a highly turbulent swirling flow. In this flow, we observe a coexistence of turbulent fluctuations and an unsteady swirling motion. The proper orthogonal decomposition (POD) is used to separate these two contributions to the total energy. POD is combined with two new vortex identification functions, Γ1 and Γ2. These functions identify the locations of the centre and boundary of the vortex on the basis of the velocity field. The POD computed for the measured velocity fields shows that two spatial modes are responsible for most of the fluctuations observed in the vicinity of the location of the mean vortex centre. These two modes are also responsible for the large-scale coherence of the fluctuations. The POD computed from the Γ2 scalar field shows that the displacement and deformation of the large-scale vortex are correlated to these modes. We suggest the use of such a method to separate pseudo-fluctuations due to the unsteady nature of the large-scale vortices from fluctuations due to small-scale turbulence.
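
The Γ1 function has a compact discrete form: at a point P it averages, over a small neighbourhood, the normalized cross product of the separation vector PM and the velocity at M, so that |Γ1| approaches 1 at a vortex centre (Γ2 is built the same way after subtracting a local mean velocity). A sketch of Γ1 on a regular grid, for interior points only; the names are ours:

    import numpy as np

    def gamma1(x, y, u, v, i, j, n=2):
        """Gamma_1 vortex-centre indicator at interior grid point (i, j),
        averaged over a (2n+1) x (2n+1) neighbourhood."""
        sl = np.s_[i - n:i + n + 1, j - n:j + n + 1]
        dx, dy = x[sl] - x[i, j], y[sl] - y[i, j]
        cross = dx * v[sl] - dy * u[sl]
        norm = np.hypot(dx, dy) * np.hypot(u[sl], v[sl])
        norm[norm == 0] = np.inf   # excludes the point P itself
        return np.mean(cross / norm)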

REVIEW ARTICLE

R53

In the last century the production and application of halides assumed an ever greater importance. In the fields of medicine, dentistry, plastics, pesticides, food, photography etc many new halogen containing compounds have come into everyday use. In an analogous manner many techniques for the detection and determination of halogen compounds and ions have been developed with scientific journals reporting ever more sensitive methods.

The 19th century saw the discovery of what is now thought of as a classical method for halide determination, namely the quenching of fluorescence. However, little analytical use was made of it until over 100 years after its discovery, when the first halide sensors based on the quenching of fluorescence started to emerge. Because of the typically complex fluorescence quenching kinetics of optical halide sensors and their lack of selectivity, they have found little if any commercial place, despite their sensitivity, in an analytical market dominated by other techniques such as ion-selective electrodes, x-ray fluorescence spectroscopy and colorimetry.
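
The classical relation underlying these sensors is the Stern-Volmer equation (a standard result, quoted here for orientation rather than taken from the review): for a fluorophore of unquenched intensity F_0 and lifetime tau_0,

    \frac{F_0}{F} \;=\; \frac{\tau_0}{\tau} \;=\; 1 + K_{SV}\,[Q] \;=\; 1 + k_q \tau_0 \,[Q]

where [Q] is the halide (quencher) concentration, K_SV the Stern-Volmer constant and k_q the bimolecular quenching rate constant; a calibrated sensor infers [Q] from the measured ratio F_0/F. Deviations from this linear form are one source of the kinetic complexity mentioned above.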

In this review article the author summarizes the relevant theory and work to date for halide sensing using fluorescence quenching methods and outlines the future potential that fluorescence quenching based optical sensors have to offer in halide determination.

PAPERS

1431

The degree of equivalence of national measurement standards is established by means of key comparisons. The analysis of data from a key comparison requires the determination of a reference value, which is then used to express the degree of equivalence of the national measurement standards. Several methods for determining a reference value are available and these methods can lead to different results. In this study current methods for determining a reference value are compared. In order to quantitatively assess the quality of performance, the methods are applied to a large set of simulated key comparison data. The simulations refer to several realistic scenarios, including correlated measurements. Large differences in the results can occur and none of the methods performs best in every situation. We give some guidance for selecting an appropriate method when assumptions about the reliability of quoted uncertainties can be made.
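
One of the candidate methods, the inverse-variance weighted mean, fits in a few lines and makes the paper's central caveat concrete: it is optimal only if the quoted uncertainties are reliable and the results uncorrelated (a hedged sketch; the names are ours):

    import numpy as np

    def weighted_mean_krv(x, u):
        """Key-comparison reference value as the weighted mean of results x
        with standard uncertainties u, plus each laboratory's degree of
        equivalence d_i = x_i - x_ref and its uncertainty (valid for a
        laboratory included in the mean)."""
        w = 1.0 / u ** 2
        xref = np.sum(w * x) / np.sum(w)
        uref = np.sqrt(1.0 / np.sum(w))
        d = x - xref
        ud = np.sqrt(u ** 2 - uref ** 2)
        return xref, uref, d, ud

Alternatives studied in such comparisons include the median and the uniformly weighted mean, which trade efficiency for robustness against underestimated uncertainties.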

1439

Non-parametric statistical tests are usually adopted to recognize drift phenomena in data series acquired by automatic measuring systems. Sometimes different tests provide different responses with regard to the presence of drift in data sets. In particular, the application of two commonly used statistical tests (i.e. the Wald-Wolfowitz test, also known as the run test, and the Mann-Whitney test, also known as the reverse arrangement test) to data acquired during long-term trials showed that they exhibited different behaviours regarding shift detection. A deeper examination of these statistical tests on real measurements therefore appears of interest; more precisely, their effectiveness is verified (i) when cyclical and monotonic drift are concurrently present and (ii) when various levels of noise are superimposed on the gathered data. Finally, on the basis of the theoretical results obtained, the statistical tests examined here are applied to the same data set in order to validate their capability for on-line drift recognition.
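
Both tests are easy to state concretely; the sketch below applies a Mann-Whitney comparison of the two halves of a series and a median-based run test with its normal approximation (our splitting convention; scipy's mannwhitneyu is used for the former):

    import numpy as np
    from scipy.stats import mannwhitneyu

    def drift_tests(series):
        """Return the Mann-Whitney p-value (first half vs second half) and
        the run-test z-score (runs above/below the median compared with the
        expected number under the no-drift hypothesis)."""
        series = np.asarray(series, dtype=float)
        half = len(series) // 2
        _, p_mw = mannwhitneyu(series[:half], series[half:])
        s = series > np.median(series)
        runs = 1 + np.sum(s[1:] != s[:-1])
        n1, n2 = np.sum(s), np.sum(~s)
        mu = 2 * n1 * n2 / (n1 + n2) + 1
        var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2) /
               ((n1 + n2) ** 2 * (n1 + n2 - 1)))
        z = (runs - mu) / np.sqrt(var)
        return p_mw, z   # small p or large |z| suggests drift

Because the two statistics respond differently to cyclical versus monotonic trends, they can disagree on the same record, which is the discrepancy the paper investigates.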

1445

This paper describes the development of a six-component force/moment sensor with plate beams, which may be used in industry for measuring the forces Fx, Fy and Fz and the moments Mx, My and Mz simultaneously, and the evaluation of its relative expanded uncertainty. The sensor has a small capacity (Fx, Fy and Fz are each 100 N; Mx and My are 1 N m; Mz is 2 N m), and the structure of its sensing element is newly modelled, designed and manufactured. Methods for calibration and for evaluation of the relative expanded uncertainty are also newly proposed. The sensor developed here is calibrated with the proposed calibration method, and its relative expanded uncertainty, evaluated using the proposed uncertainty evaluation method and the calibration results, is found to be less than 2.78%. It is therefore thought that this six-component force/moment sensor can be used effectively in industry, and that the proposed methods can also be applied to the calibration and uncertainty evaluation of other multi-component force/moment sensors.
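
Whatever the mechanical design, the data reduction behind such a sensor is a linear calibration; a generic least-squares sketch (not the specific procedure proposed in the paper; the array shapes and names are assumptions):

    import numpy as np

    def calibrate(loads, outputs):
        """Fit the 6 x 6 sensitivity matrix C from n calibration states:
        `loads` is (n, 6) applied (Fx, Fy, Fz, Mx, My, Mz), `outputs` is
        (n, 6) bridge readings, modelled as outputs ~ loads @ C.T. The
        off-diagonal terms of C quantify crosstalk between components."""
        C, *_ = np.linalg.lstsq(loads, outputs, rcond=None)
        return C.T

    def measure(C, v):
        """Recover the six load components from one reading vector v."""
        return np.linalg.solve(C, v)

The calibration residuals, propagated through the inverse of C, are one ingredient of the expanded uncertainty quoted above.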

1456

The noise of two types of input stage, a high-input-impedance buffer and a current-to-voltage converter, operated with high-impedance sources at relatively high frequencies, was analysed. It was found that the noise may reach, at first sight, unexpectedly high values. In buffers, the biasing circuit's noise is multiplied by the bootstrap. The input capacitance and the parasitic capacitance of the biasing circuit reduce the bandwidth, but without affecting the signal-to-noise ratio. The input capacitance, or any other capacitance connected to the input of a current-to-voltage converter, may increase the noise gain by orders of magnitude and significantly reduce the signal-to-noise ratio. The results throw doubt on the frequently accentuated advantage of using current-to-voltage converters for amplifying low-level signals from high-impedance sources, at least at high frequencies.
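
The mechanism behind the converter result is the frequency-dependent noise gain. In textbook form (our sketch, consistent with the paper's conclusion rather than taken from it), with feedback impedance Z_f, total input-node impedance Z_in, input-node capacitance C_in and feedback capacitance C_f:

    G_n(f) \;=\; 1 + \frac{Z_f}{Z_{in}} \;\longrightarrow\; 1 + \frac{C_{in}}{C_f} \qquad \text{at high frequencies,}

so the op-amp voltage noise, multiplied by G_n, can dominate the output even though the signal transimpedance is unchanged.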

1465

To investigate the flow characteristics of water in a porous medium containing a second, stagnant, immiscible liquid phase, we experimentally studied the flow of water through a bed of glass beads containing stagnant silicone oil. Two magnetic resonance imaging (MRI) techniques were used: the chemical shift imaging method and the phase-encoding method. Two-dimensional images with and without the silicone oil showed that the water velocity profiles changed drastically because the stagnant silicone oil changed the pore structure. Significant changes in the flow were also confirmed by combining eleven two-dimensional images of velocity profiles to visualize the three-dimensional flow path and the stagnant silicone oil distribution in the porous medium. A statistical method was developed to clarify the relation between pore structure and fluid flow. This method showed that, when stagnant silicone oil is present but the net flow of water is the same, the reduction in the rate of flow from large, oil-blocked pores is compensated by an increase in the rate of flow from the larger number of small pores.

1473

In this paper, a sinusoidal light-frequency modulation model with a changeable modulation factor, based on a semiconductor laser, is put forward mathematically for an interferometric displacement sensor. The shortcomings of interferometric systems based on light-frequency ramp modulation, such as the Gibbs phenomenon, the spurious sideband effect and the relatively small displacement measurement range, are overcome. In particular, proper mathematical deduction shows the direction of the phase shift corresponding to the movement of the measurand (the tendency towards an increment or a decrement). Experimental results show that our new system is more stable and accurate than the light-frequency ramp modulated one.

1480

Double beam modulation is widely used in atomic collision experiments in the case where the noise arising from each of the beams exceeds the measured signal. A method for minimizing the statistical uncertainty in a measured signal in a given time period is discussed, and a flexible modulation and counting system based on a low cost PIC microcontroller is described. This device is capable of modifying the acquisition parameters in real time during the course of an experimental run. It is shown that typical savings in data acquisition time of approximately 30% can be achieved using this optimized modulation scheme.
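
The core of such an optimization is the classical counting-statistics result for splitting a fixed total time between the two modulation states: the variance r1/t1 + r2/t2 of the net rate is minimized when t1/t2 = sqrt(r1/r2). A sketch of this generic result (the paper's real-time scheme is more elaborate):

    import numpy as np

    def optimal_split(r1, r2, total):
        """Optimal division of `total` counting time between two modulation
        states with observed rates r1 and r2, minimizing the variance of
        the net (difference) rate."""
        ratio = np.sqrt(r1 / r2)
        t1 = total * ratio / (1.0 + ratio)
        return t1, total - t1

Updating r1 and r2 during the run and re-splitting accordingly is the kind of adaptive behaviour that yields the quoted saving of approximately 30%.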

1486

We have developed experimental and analytical methods to measure the anisotropic elastic properties of thin supported films. In this approach, surface acoustic waves (SAWs) were generated using a line-focused laser. Waves with frequency components up to 400 MHz were detected by a Michelson interferometer. Dispersion relations for the SAW phase velocity were calculated from displacement waveforms acquired with source-detector separations of 5-15 mm. To determine film properties from the dispersion relations, we developed a new inversion algorithm based on the delta-function representation of the fully anisotropic, elastodynamic Green function for wave propagation. Our methods were first applied to an elastically isotropic aluminium film on an isotropic fused silica substrate. The results demonstrate the validity of our methods and are in good agreement with literature values; they also illustrate various aspects of measurement uncertainty. The same SAW methods were used to examine a series of titanium nitride films on single-crystalline silicon wafers. The inversion results, assuming orthotropic elastic symmetry, indicated that c11 increased and c13 decreased with increasing film thickness. Values for the film thickness determined by our analysis were in good agreement with destructive measurements of the actual thicknesses.

1495

A new multi-degree-of-freedom measurement system for milli-structures is presented. The methodology is based on the optical beam deflection method and triangulation. It employs a diffraction grating as a reflective object, which reflects an incident laser beam into several directions. To obtain the pose of a measured object, the positions of the zeroth- and first-order diffracted rays are measured using three two-dimensional detectors. From these values, the six-degree-of-freedom displacement is calculated through a kinematic analysis. The performance was evaluated in terms of resolution, measurement errors and crosstalk. The results show that the measurement errors and maximum crosstalk are within ±0.5 µm in translation and ±2'' in rotation. As an application, we measured the multi-DOF motion of a bimorph-type PZT actuator and obtained its frequency response function.

1503

In this paper we present a device, called the stress-optic modulator (SOM), that allows us to perform high-sensitivity measurements of linear birefringence at low signal frequencies. The SOM can be used as a polarization modulator in a heterodyne detection scheme to measure the ellipticity induced on a linearly polarized laser beam. Its operation makes use of the strain produced on a glass window by two blocked PZTs, enabling careful control of the stress and of the anisotropy induced on the isotropic glass. A sensitivity of 3×10⁻⁸/√Hz for an ellipticity signal at a frequency of 1 Hz is obtained. For long-term operation of the device a drawback arises from the drift of the spurious birefringences present in the optical system. Through the use of an active feedback system this quasi-static anisotropy can be compensated, enabling much longer integration times without degradation of the sensitivity.

1509

A novel piezo vibration platform of compact size (120×120×120 mm³) for the dynamic performance calibration of probes has been developed. A piezo tube is employed to generate the movement, which is measured in real time by a miniature fibre interferometer and closed-loop controlled by a fast digital signal processor; the calibration can thus be made traceable to the national length standard. A 20 kHz control-loop frequency with 1.71 nm uncertainty has been achieved. The maximum calibration range is 20 µm with 0.3 nm resolution. The platform can generate sinusoidal signals of up to 300 Hz and various other waveforms, such as square, triangle and sawtooth. It can also work in sweep mode, shifting the frequency continuously up to 100 Hz, which is a very useful function when the amplitude-frequency response of the probe is to be investigated.

1515

This paper describes the design and experimental evaluation of a radiometric instrumentation system that has recently been developed for the measurement of volumetric concentration, velocity and mass flow rate of pneumatically conveyed solids. The system employs `micro' beam collimation of gamma radiation to generate multiple, parallel interrogation beams of small cross-sectional area. This configuration is shown to almost eliminate the geometrical errors associated with more conventional divergent-beam interrogation. Experimental results obtained off-line using idealized flow models, and also on-line using a pneumatic conveyor, demonstrate the performance of the system and highlight where further development is needed.

1529

This paper describes an investigation into a cross correlation flow meter, using `global' resistance sensors, for measuring the homogeneous velocity of inclined oil-in-water flows. The cross correlation flow meter measures the axial propagation speed Ucc of intermittent Kelvin-Helmholtz structures in the flow. It is shown that, for inclination angles in the range 15° to 45° from the vertical, the measured velocity Ucc is dependent only upon the homogeneous velocity and is independent both of the inclination angle θ of the flow and of the oil volume fraction α. For an angle of inclination to the vertical of 60°, Ucc is dependent both upon the homogeneous velocity and upon the oil volume fraction α. For inclination angles in the range 15°⩽θ⩽45° predictions of the homogeneous velocity were obtained using only the measured cross correlation velocity Ucc. For 15°⩽θ⩽45°, the mean percentage error Ē in the predicted homogeneous velocity was 0.1% whilst the standard deviation S̄ of the percentage error in the predicted values of the homogeneous velocity was found to be 4.0%. For the angle of inclination to the vertical of 60° predictions of the homogeneous velocity were obtained using the measured cross correlation velocity Ucc and the measured oil volume fraction α. For θ = 60° the mean percentage error Ē in the predicted homogeneous velocity was -0.39% whilst the standard deviation S̄ of the percentage error in the predicted values of the homogeneous velocity was 5.2%.
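
The measurement principle reduces to finding the transit-time lag between the two axially separated sensor signals; a minimal sketch (the names and the uniform-sampling assumption are ours):

    import numpy as np

    def cross_correlation_velocity(upstream, downstream, dt, spacing):
        """Estimate Ucc from two sensor signals sampled at interval dt and
        separated axially by `spacing`: the lag that maximizes their
        cross-correlation is the transit time of the flow structures."""
        a = upstream - np.mean(upstream)
        b = downstream - np.mean(downstream)
        corr = np.correlate(b, a, mode='full')
        lag = np.argmax(corr) - (len(a) - 1)   # samples; > 0 if b lags a
        return spacing / (lag * dt)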

1538

A new model for the speed of propagation of kinematic waves in vertical upward, bubbly oil-in-water flows has been proposed. The new kinematic wave model has been used in conjunction with (i) appropriate values for the `distribution parameter' C0, the single-droplet terminal rise velocity vt0 and an exponent n, obtained from a drift velocity model; (ii) a statistical relationship for the quantity αUh(dC0/dα); and (iii) measurements of a cross correlation velocity Ucc and the volume fraction of oil α to make predictions of the superficial velocities of the mixture, the oil and the water in vertical upward, bubbly oil-in-water flows. The systematic errors in these predicted values of the superficial velocities of the mixture, the oil and the water were 0.16, -0.04 and 0.04%, respectively. The kinematic wave model can thus be used in conjunction with a cross correlation flow meter for accurate measurement of flow rates in vertical oil wells. It was inferred from the kinematic wave model and the experimental data that, for low values of the volume fraction of oil α, the distribution parameter C0 decreases rapidly with increasing α. At higher values of α the decrease in the value of C0 with increasing α is less marked. This result is consistent with the physical explanation that, at low values of α, the oil droplets tend to preferentially accumulate in the relatively fast moving regions of the flow. As α increases the oil droplets become more uniformly distributed amongst the faster and slower moving regions of the flow.
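
The closure used in such drift-flux style models can be sketched as follows (a simplified version in which the homogeneous velocity is identified with the measured Ucc; the paper's kinematic wave model refines precisely this step, so treat the sketch as orientation only):

    def superficial_velocities(ucc, alpha, c0, vt0, n):
        """Predict mixture, oil and water superficial velocities from the
        cross-correlation velocity and the oil volume fraction, using the
        drift-flux form u_oil = C0*Uh + vt0*(1 - alpha)**n."""
        uh = ucc                                   # simplifying assumption
        u_oil = c0 * uh + vt0 * (1.0 - alpha) ** n
        q_oil = alpha * u_oil                      # oil superficial velocity
        q_water = uh - q_oil                       # water superficial velocity
        return uh, q_oil, q_water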

1546

In this paper it is shown that a drift velocity model of the kind described by Hasan and Kabir can be used to accurately describe the behaviour of inclined oil-in-water flows with a centre body. It is shown, however, that the values for the distribution parameter C and the single-droplet terminal rise velocity vtθ (both required by the drift velocity model) suggested by Hasan and Kabir may be inappropriate for certain flow conditions. In particular, it is shown that the presence of a centre body, of the type used during production logging operations, can have a marked effect on the values of C and vtθ. Finally the authors show that, for oil-in-water flows inclined at angles of up to 60° from the vertical, the drift velocity model can be used in conjunction with values of C and vtθ obtained by calibration experiments to make reasonably accurate predictions of the superficial velocities of oil and water. For the range of flow conditions investigated in the present study, the average percentage error, the standard deviation of the percentage error and the average absolute percentage error in the predicted oil superficial velocity were 0.4%, 6.9% and 5.5% respectively; the corresponding values for the predicted water superficial velocity were -0.22%, 5.6% and 3.59%. The results presented in this paper will allow better estimates of the volumetric flow rates of oil and water at a given location in the well to be made from measurements of the homogeneous velocity and the oil volume fraction obtained at that location during production logging operations.

1555

This paper concerns the design and development of two special novel thin internal strain gauge balances for testing thin slab delta wings at hypersonic speeds. Of the two balances, one is a thin three-component balance and the other is a thin six-component balance. Unlike conventional balances, which are circular in cross section, the balances discussed here are flat and have rectangular cross sections. The design philosophy and the metallic architecture of these balances are discussed in detail. The paper is divided into three parts: the first deals with the thin three-component balance, the second addresses the thin six-component balance and the third presents typical results obtained using these two balances.

The thin three-component balance has a thickness of 2.5 mm and can be housed inside a wing as thin as 6 mm. Because of its thinness, the axial output exhibited a non-linear interaction from the normal force loading; hence, combined loading was performed. In order to deduce the second-order coefficients, a new method is proposed, and a comparison shows that it is better than the existing method. This balance has been used to generate aerodynamic data on thin slab delta wings with and without lee-side balance housing bodies. In this way, it is demonstrated that the lee-side bodies do interfere even when they come into the aerodynamic shadow of the wing.

The thin six-component balance has a thickness of 4 mm and can be housed inside an 8 mm thick wing. Although this balance is also thin, it exhibited excellent linearity in all the bridges. Nevertheless, it was subjected to several possible combinations of combined loading, and the `work back' loads were calculated using the linear coefficients. The work back loads agree very well with the applied loads; hence, linear coefficients have been used for data reduction. The effects of flap deflection on the aerodynamic characteristics of thin slab delta wings have been studied using this balance. All the experiments were conducted at Mach 8.2 and a Reynolds number of 2.13×10⁶.

1568

A thrust measurement system has been developed for the purpose of measuring the thrust produced by a stationary plasma thruster. The system designed and fabricated consists mainly of a thrust balance assembly with strain gauge sensors and associated signal conditioning circuitry. The performance of the system was studied in a vacuum chamber, under space-simulated conditions, by operating the thruster with xenon as the propellant. Details of the in situ calibration procedure followed are given. The thrust output for discharge powers ranging from 170 to 260 W was measured and found to be in the range 4-14 mN. The measurement accuracy and resolution were found to be ±1 mN and 0.3 mN respectively. The specific impulse and thrust efficiency were also estimated.

1576

The practical aspects of an advanced schlieren technique, which has been presented by Meier (1999) and Richard et al (2000) and in a similar form by Dalziel et al (2000), are described in this paper. The application of the technique is demonstrated by three experimental investigations on compressible vortices. These vortices play a major role in the blade vortex interaction (BVI) phenomenon, which is responsible for the typical impulsive noise of helicopters. Two experiments were performed in order to investigate the details of the vortex formation from the blade tips of two different helicopters in flight: a Eurocopter BK117 and a large US utility helicopter. In addition to this, simultaneous measurements of velocity and density fields were conducted in a transonic wind tunnel in order to characterize the structure of compressible vortices.

The background oriented schlieren technique has the potential of complementing other optical techniques such as shadowgraphy or focusing schlieren methods and yields additional quantitative information. Furthermore, in the case of helicopter aerodynamics, this technique allows the effect of Reynolds number on vortex development from blade tips to be studied in full-scale flight tests more easily than through the use of laser-based techniques.

1586

International guidelines recognize that it is necessary to offer calibration services for mammography beams in order to improve the quality of clinical diagnosis. Major efforts have been made by several laboratories to establish an appropriate and traceable calibration infrastructure and to provide the basis for a quality control programme in mammography. The contribution of the radiation metrology network to the users of mammography is reviewed in this work, and the steps required for the implementation of a mammography calibration system using a constant-potential x-ray machine and a clinical mammography x-ray machine are presented. The various mammography radiation qualities discussed in this work are in accordance with IEC 61674 and the AAPM recommendations. They are at present available at several primary standard dosimetry laboratories (PSDLs), namely the PTB, NIST and BEV, and at a few secondary standard dosimetry laboratories (SSDLs), such as those at the University of Wisconsin and at the IAEA. We discuss the uncertainties involved in all steps of the calibration chain in accordance with the ISO recommendations.

1594

For the calibration of comet mass spectrometers a novel calibration system was designed to simulate the neutral gaseous atmosphere of comets. The facility consists of a fully automated three-chamber ultra-high vacuum system with a separate gas mixing unit. The molecular beam technique is used for the simulation of the dynamic environment of comets. The gas mixing unit allows one to produce molecular beams and static environments of pure water vapour, of mixtures of water vapour and minor gas constituents and also of a wide range of other gases which are expected in the environment of comets. The uniqueness of this system lies in its emphasis on accuracy and long term stability both in the static and in the molecular beam mode. Of particular interest is the determination of the absolute pressure for all gases in the range below 10⁻⁶ mbar and the determination of the molecular beam characteristics. Although it is optimized for cometary environments, the calibration system is, by virtue of its versatility, well suited for the calibration of various space instruments over an extended parameter range.

ERRATUM

BOOK REVIEWS

1606

Three decades have passed since Butters and Leendertz published their ground-breaking articles on electronic speckle-pattern interferometry (ESPI). However, it has been during the last 15 years that ESPI, boosted by the availability of affordable and increasingly powerful digital image processing systems, has grown into one of the leading optical measurement techniques, receiving new names such as TV holography or digital speckle-pattern interferometry. Nowadays, it deserves a chapter in almost every book that is published on speckle techniques, holographic interferometry or optical metrology. Despite this, a monograph on DSPI has long been awaited, and here it is.

Pramod Rastogi has edited this comprehensive review that covers most of the concepts, methods and techniques in the field of DSPI. The authors have conveniently classified these items, giving concise descriptions of them, and they provide an extensive set of references for further reading within the scope of each chapter. The contents are fully up-to-date, including such recent developments as temporal phase unwrapping, incremental measurement methods, compact fibre optic systems, techniques for transient analysis using pulsed lasers and high speed cameras, digital holography, etc.

The book is organized into six chapters. The first, by Mathias Lehmann, presents an overview of the statistical properties of speckle that affect the behaviour of electronic speckle interferometers. Besides the usual statistics of the intensity in speckle fields and speckle interferograms, it treats the statistics of decorrelation-induced phase-errors and the optimization of the interferometer parameters using statistical criteria.

The second, third and fourth chapters are dedicated to describing the different techniques of `classical' DSPI. Jonathan Huntley takes a remarkably neat approach to the automated analysis of speckle interferograms, describing both temporal and spatial phase evaluation and phase unwrapping techniques, as well as post-processing methods to convert the optical phase maps into measurements of the parameter under study. Pramod Rastogi reviews the interferometric geometries that provide sensitivity to displacements, derivatives of displacement, shape and slope for static measurands. A respectable number of schemes and pictures illustrate the multiple configurations, and many examples of applications give a view of the possibilities of these techniques. The measurement of dynamic events is treated by Andrew Moore, Julian Jones and Jesús Valera. They introduce the principles of the techniques that allow the measurement of periodic and transient vibrations, disclosing some practicalities about the use of continuous and pulsed lasers, fibre optics and diverse types of CCD camera.

The fifth chapter, by Mikael Sjödahl, is dedicated to digital speckle photography with special emphasis on the measurement of speckle displacements by numerical correlation and on the influence of digital sampling over the error sources.

Finally, in the last chapter, Giancarlo Pedrini and Hans Tiziani present an introduction to what is nowadays the most fashionable technique in the DSPI bunch: digital holographic interferometry. The fundamentals and variants (quasi-Fourier, Fresnel and image-plane) of digital recording and reconstruction of holograms are described, as well as their application to the measurement of displacements, shapes and phase objects.

From my point of view, this book strikes a good balance between theoretical and formal aspects and those more applied or technical. It will be an excellent guide for newcomers to the field of DSPI - whether scientists, engineers or postgraduate students - as well as a valuable source of reference for those already involved in the development or application of these rapidly evolving techniques.

Ángel F Doval

1606

The book is divided into four parts: `Systems', `System Components', `Measurements' and `Microprocessor Based Systems'.

The first part of the book introduces Measurement Systems, Performance Terminology, Errors, Dynamic Characteristics, Loading Effects, Noise and Reliability.

The `Measurement Systems' subsection is not sufficiently comprehensive, as only a small number of derived units have been listed. The National Physical Laboratory (NPL) has been publishing posters that provide an excellent description of physical and chemical quantities, followed by a small abstract stating their derivation from fundamental quantities. Such a description is not only of importance to metrologists; it also assists engineers in seeking innovative solutions to everyday problems. A more systematic approach along these lines would be far more appropriate for this subsection. An introduction to dimensional analysis would also be useful here. Dimensionless quantities and their units, as well as important constants, have been omitted, limiting the possible uses of the book.

With regard to the subsection describing `Measurement Errors', I find both the NPL report on the treatment of errors and the pocket book by N C Barford, Experimental Measurements: Precision, Error and Truth, far more useful and informative than the description found in this pocket book.

The subsection describing `Dynamic Characteristics' of instruments is well written and concise, as would be expected from the author's other publications, although a table of z-transforms could also have been introduced in a separate section.

I found the subsection describing `Loading Effects' to be useful in summarizing important concepts, especially the part that describes loading of elements in a measurement system.

The subsection on `Noise' is seriously deficient, as there is no mention of the voltage and current noise associated with op-amp circuits. Instruments important to measurement science, such as phase-sensitive detectors and box-car averagers, are also not mentioned. A more useful practical approach can be found in the following books: P J Fish's Electronic Noise and Low Noise Design, F R Connor's pocket book series and T H Wilmshurst's book on Signal Recovery and Noise in Electronic Systems. In addition, digital phase noise is not mentioned at all - a serious omission, as many systems are digital nowadays.

I was pleased to find a subsection on the `Reliability of Instruments' as this is a topic that is not sufficiently covered in many classic Measurement and Instrumentation textbooks, an exception being Bentley's Principles of Measurement Systems.

My final criticism of the first part of this pocket book is that in the entire `Systems Section', the notion of feedback instruments could not be found. This is a serious shortfall, bearing in mind the distinct advantages that can be gained with the use of feedback methods.

The second part of the book, `System Components', describes Transducers, Signal Converters and Display Systems. Resistive, capacitive and inductive transducers have been covered, but not to the extent found in other books, e.g. Sensors and Transducers by Usher and Keating. I found the tabulated thermoelectric emf values with reference junction at 0 °C useful, although the descriptions of photovoltaic and electrochemical transducers were seriously deficient.

My favourite part of the book is the section describing AC bridges, which clearly distinguishes between the Owen, Maxwell, Hay, de Sauty, Wien and Schering configurations. These can trigger the engineer's imagination for a variety of measurement schemes. Unfortunately, the subsection on frequency measurements and digital frequency counters would require significant expansion.

The subsection on `Signal Converters' could also be significantly expanded. For example, the three-amplifier differential input instrumentation amplifier configuration featuring high input impedance and adjustable gain is not mentioned at all.

The subsection describing `Display Systems' may be considered obsolete and of limited use to current practising engineers, as most systems nowadays interface directly with microprocessors.

The third part of the book, `Measurements', covers Chemical Composition, Density, Displacement, Electrical Quantities, Flow, Force, Level, Pressure, Radiation, Stress and Strain, Temperature and Vacuum Sensors.

The `Chemical Composition' subsection is too much of an overview to be of any use to a practising engineer. I found the general diagrams of atomic emission/absorption spectrometry, fluorimetry and mass spectrometry lacking in detail, and of no practical use whatsoever.

In contrast, the section on `Electrical Quantities' was sufficiently detailed. The subsection on `Density' is useful, but that describing `Displacement' can even be misleading. For example, a heterodyne interferometer is described whereas homodyne or superheterodyne systems are not mentioned at all. A description of optical fibre `point', `distributed' or `quasi-distributed' sensors should have also been included in this subsection. Fast-pulse systems are also commonly used in measurement science with applications in chemistry, physics and biology. Their general principle of operation should also have been included.

In addition, a disproportionate amount of detail (8% of the entire book) has been dedicated to the description of the Measurement of Flow, whereas only five pages are devoted to the Force and Radiation topics. This is not justifiable if the author's goal is to give a balanced account of the techniques used in instrumentation and measurement science.

Finally, in part four, `Microprocessor Based Systems' are described. There are two subsections describing `Intelligent Instruments' and `Interfacing'. The former is an overview of Microprocessor Systems, Microcontrollers, Data Acquisition Systems and Data Logging. The `Interfacing' subsection briefly describes a Standard Bus, Centronics and Serial Ports, the I2C Bus, as well as Interfacing Peripherals and Programmable Interfaces.

Here again, my criticism is that feedback instruments in which the analogue loop has been replaced by a digital one have not been analysed in detail. Most PID controllers nowadays can easily be implemented in digital form using microprocessor systems, eliminating possible drifts of analogue components. In addition, oversampling methods for A/D and D/A conversion are not described in sufficient detail; see for example Oversampling Delta-Sigma Data Converters edited by Candy and Temes and Advanced Instrumentation and Computer I/O Design by Garrett (both IEEE Press). I have found PC-Based Instrumentation and Control by Tooley and The Art of Digital Audio by Watkinson far more useful for practical applications. A paragraph on sensor fusion, and a mention of the latest advances in commercially available digital sampling oscilloscopes such as the Agilent Technologies `Infiniium' family or the Tektronix 8000 Series, which should soon be compatible with LABVIEW software, would also have been useful.

Overall, I found this pocket book of limited practical use, even to first-year undergraduate students or technicians at the start of their careers, when selecting components from RS or Farnell catalogues for small projects.

Although many measurement techniques are based on the interaction of light with matter, electromagnetism and microwave techniques are not covered in this pocket book. Furthermore, astrophysical techniques, or techniques used in particle physics, instrumentation for biology, plasma physics and nuclear science are not mentioned at all.

Currently, there is a plethora of practical books on analogue electronics that a practising engineer or technician may consult, such as Horowitz and Hill's Art of Electronics, the book A Practical Introduction to Electronic Circuits by M H Jones or the series of pocket books by R M Marston, which will provide solutions to most practical problems.

More detailed engineering solutions can be found in most Application Notes published by semiconductor companies such as Analog Devices, National Semiconductors etc, or instrumentation companies such as Hewlett-Packard, Anritsu-Wiltron and Tektronix at the component or system level respectively. These are far more comprehensive and useful to engineers and practising technicians.

A pocket book on `Instrumentation & Measurement' should be a distillation of existing measurement techniques described in a concise and practical way. My concern is that this pocket book tries to cover too many topics in a very short space.

Although not pocket books, the Concise Encyclopedia of Measurement & Instrumentation edited by Finkelstein and Grattan (Pergamon Press), the Electrical Engineering Handbook edited by Dorf (CRC Press), the Electronic Instrument Handbook edited by Coombs, and Doebelin's Measurement Systems are excellent examples of complete and useful reference books available at the moment.

Sillas Hadjiloucas

1607

Optics of Nanostructured Materials covers a selection of advanced research topics that deal with the optical properties of disordered materials including fractals, clusters, nanocomposites, aggregates and semiconductor nanostructures, and those of ordered artificial systems like the photonic band-gap crystals and the photonic crystal fibres. Each chapter of this book is written by leading scientists in their own field. The format is that of a review article, which first places the field in context and then gives a detailed and in-depth description of recent developments in that field. The chapters are well documented with an extensive bibliography, which will make the book valuable for advanced graduate students and researchers who are interested in learning about a given topic and getting an overview of the latest results.

The first two chapters deal with photonic crystals with an emphasis on the concept of photonic band-gap and the theoretical methods used to describe the electromagnetic waves in these materials. Recent achievements in this field are described: they include a discussion of defects in artificially made 3D photonic crystals and their contribution to the transmission of the waves when absorption is accounted for. Applications of photonic crystals to waveguides and resonant cavity antennas are described. A separate chapter is devoted to the fabrication of photonic crystal fibres and a description of their intriguing and sometimes counterintuitive waveguiding properties. Two chapters are devoted to near-field optics. The first one gives an exhaustive and comprehensive presentation of both the main principles of this technique with its various configurations and the challenging aspects of the theoretical description of near-field optical phenomena. Several illustrations of these phenomena are described in detail: light confinement by phase conjugation of optical near-field, localization of surface plasmon polaritons by surface roughness and the observation of localized dipolar excitations. The second chapter deals with near-field optics in semiconductor heterostructures and nanostructures. A microscopic theory of the optical properties of semiconductor structures is presented when either detection or excitation (or both) is performed in the near-field. The excitation of a sample through an aperture placed above or on a semiconductor surface is treated in detail to illustrate the specificity of near-field versus far-field optical studies. The use of near-field techniques to study localized excitons in disordered heterostructures and quantum dots is then reviewed with an emphasis on the distortion of the radiative properties of excitons under these observation conditions and on wave packet dynamics of coherently excited excitons.

Semiconductor nanostructures are described in two separate chapters reviewing the optical and electronic properties of quantum wires and quantum dots. The chapter devoted to quantum wires places a large emphasis on their optoelectronic properties and their potential device applications. A complete section describes a calculation of the third-order nonlinear optical susceptibility resulting from excitonic and biexcitonic contributions. The effect of a magnetic field on the binding energy of both the excitons and the biexcitons is also presented, with the details of the calculation. The chapter on semiconductor quantum dots starts with a review of the principal fabrication techniques, as many approaches have been used to produce quantum dots with different characteristics. A section on optical devices describes specific applications where quantum dots present an advantage over conventional semiconductor heterostructures. A final section describes the development of quantum logic gates and the limitations imposed by quantum coherence on their operation.

The remaining six chapters of this book cover a classical description of the optical properties of various nanostructures. One chapter deals with the occurrence of light localization in three-dimensional disordered dielectrics. The theoretical model is developed completely and new aspects of the Anderson localization of electromagnetic waves in 3D random dielectric media are unravelled. A chapter on random metal-dielectric films deals with the optical properties of these films and a theoretical understanding of anomalous optical phenomena that are found in the near IR and microwave range of the EM spectrum. Through detailed computer simulations giant fluctuations of the electric and magnetic fields are found near the percolation threshold. A chapter on the optical nonlinearities in metal colloidal solutions reviews various aspects of experimental studies of metal fractal aggregates. A theoretical description of the linear and nonlinear optical properties of disordered clusters and nanocomposites is described in a separate chapter with an emphasis on photoprocesses and their enhancement in clusters. Two chapters are dedicated to the optical properties of fractal smoke or soot aggregates. One emphasizes the geometrical aspects and the structure of soot aggregates that modify the optical properties of these soot particles when they coagulate in large aerosol structures. The other one presents a theoretical approach to the calculation of the optical properties of fractal smoke assuming a simple fractal structure that is far less complex than the structure of soot aggregates found in aerosols.

Daniel Oberli

1608

The field of microelectromechanical systems (MEMS) and microtechnology has grown so wide and large that it has become logical and reasonable to focus on selected areas rather than trying to cover the entire field in one textbook. This new book Mechanical Microsensors is an example of such a special focus. The authors, Professor Miko Elwenspoek and Dr Remco Wiegerink, lead and participate in very successful microsystem activities at the Twente University of Technology in the Netherlands. Although the scope of the book, from a sensor point of view, is limited to describing silicon sensors for pressure, force, acceleration, angular rate and fluid flow, about one third of the book also gives very good basic information on the mechanics and micromachining technologies that are used to make not only sensors but also other silicon MEMS. The theoretical chapters on the mechanics and physical sensing principles of the sensor structures present equations ranging from basic general relations to final, useful engineering equations for typical silicon structures.

In particular, the chapter on flow sensors contains a thorough review of the fundamental theory as well as a state-of-the-art presentation of realized silicon flow sensors based on different principles. Special attention is given to resonant sensors in the different sensor chapters as well as in a separate chapter. Of particular value is the fact that the authors go further than the description of the silicon sensor elements and also present solutions on how to interface these sensors to the surrounding world - electronically in the chapter on `Electronic Interfacing' as well as physically in the chapter on `Packaging'.

To summarize, this textbook gives a comprehensive overview of mechanical microsensors which is especially well suited for students on courses in mechanical microsensors, but is also valuable for people in research and industry with an interest in this exciting and growing field.

Göran Stemme

1608

Notwithstanding the title of Infrared Detectors and Emitters: Materials and Devices, the major theme of this book is infrared detection, and in particular the varied materials and device technologies which compete to provide solid state detector arrays for high performance infrared imaging.

One of the advantages of a book of this kind is the opportunity to compare, in a single volume, the relative merits and maturity of such a broad spectrum of infrared materials. The book deals comprehensively with the mainstream technologies of mercury cadmium telluride, indium antimonide, metal silicides, quantum wells and uncooled pyroelectrics and microbolometers. In addition, we have individual chapters on some aspiring alternatives such as lead chalcogenides, thallium-based compounds, the zinc and manganese alloys of HgTe, and low-dimensional structures using InAsSb and HgTe-CdTe.

At times the enthusiasm and rivalry among the authors shows through in the claims made for their respective technologies. For example, in chapter 7 we learn that `the largest single array in use today' is a 1024×1024 element InSb array. Later, in chapter 12, we learn `the largest reported to date' is a 2048×2048 MCT array! Enthusiasm also accounts for the inclusion of topics on magnetic field sensors and transistors, which are outside the scope of the book's title.

The black-and-white photographic reproductions in this book, though adequate, are small and somewhat grainy and do not do full justice to the original subjects.

The book provides a useful, compact reference work for the infrared specialist and a good tutorial for the non-specialist, particularly through the two introductory chapters on the operating principles and assessment of devices, and theoretical sections dispersed amongst the other chapters. Chapters 1 and 2 are recommended reading before full immersion in the later chapters, which are all by well-known experts in their chosen fields. The chapters on uncooled microbolometer arrays by R A Wood and on photovoltaic detectors in MCT by M B Reine are especially lucid and comprehensive.

Peter Knowles

1609

We've all heard of `sick-building' syndrome and the misery this can inflict in the workplace in terms of poor health and lost production. The notion of the Intelligent Building is the modern civil engineer's Big Idea in tackling these and other such deficiencies. The intelligent building can adapt itself to maintain an optimized environment. This ability relies on sensors as a front-line technology - the subject of this volume.

Gassmann and Meixner tackle the subject matter by using five categories of intelligent building technology: energy and HVAC (heating, ventilation and air conditioning), information and transportation, safety and security, maintenance, and facility management. These categories of home and building technology are intended to encompass domestic as well as workplace and public environments, but as the introduction states, the breakthrough into the domestic market is not quite here yet.

They have targeted, successfully in this reviewer's opinion, the researcher and designer in the field who has his or her own specific interest in sensor issues.

Each section of the book contains a number of articles contributed by predominantly European, particularly German and Swiss, research institutions. They cover subjects as diverse as inflatable buildings and a combined sensing chair and sensing floor that allows the exact interaction of a human user with his/her immediate surroundings to be integrated within an information system.

A fascinating item on biometric access security systems brings to life the `spy-thriller' world of automatic iris and fingerprint scanners as a means of secure access to key-less buildings. The discussion extends to threats to such systems: `replay' attacks, such as presenting a face-recognition system with a photograph of an individual, reveal the further steps needed before the technology can be considered safe.

Inevitably, though, it is the massive strides in communications and processing technology, visible in the contributions to this volume, that point to where most of the advances will come. The challenge of interconnecting numerous sensors - the `nervous system' of the building - is approached in items covering the complexities of fieldbus systems and the use of `wireless' in-building networks.

This well-presented volume, with good-quality illustrations, ends with an investigation of system technologies for the automated home of the near future. Presence-controlled heating and air conditioning, telemonitoring of the sick or elderly, gas and fire detection and, of course, a high-data-rate communication backbone to link everything: all these could feature in the future household.

We shall see.

Peter Foote

1609

Devices for use in optical communication systems have experienced a massive increase in attention over the past few years. In the mid-1990s there was a feeling in many quarters that the industry had available most of the components it needed for high-capacity transmission systems and that, apart from some predictable developments, the field had become mature. This situation changed very rapidly with the growth of traffic and the lower transmission costs demanded by the internet. The result was the introduction of higher modulation speeds, longer transmission distances and wavelength division multiplexing (WDM), together with the widespread use of erbium-doped fibre amplifiers and the potential for new architectures, including all-optical networks. All this activity has brought a huge focus and pressure on the performance of the components required to implement these systems, and a substantial gearing-up of their production.

Fibre Optic Communication Devices covers this very fertile field by describing the enabling components, particularly the fundamentals of their operation. It starts with a clear description of the architectures and technology of long-haul and, more thoroughly, optical WDM networking, covering the basic functional elements in enough detail to provide a framework for the later sections on specific devices. The body of the book then covers fibres, transmitters, detectors, amplifiers, integrated optics, wavelength-selective devices, optical switching, and hybrid and monolithic integrated optoelectronic circuits.

With this breadth, it is obviously necessary to limit the scope of treatment on each topic and, in my opinion, the book succeeds well. Since the book is a set of chapters by different authors, there are differences in style, although the general approach is fairly consistent, with a brief coverage of the basic principles, a description of device technology and characteristics, and some thoughts on future developments. An important factor is that devices are mainly seen through their performance as it relates to their system applications. Not surprisingly, given the authors, there is a strongly European feel, with some of the demonstrators described having been developed on various EC collaborative projects. The book is well presented, readable and adequately up-to-date, with the most recent references in the late 1990s, and at least mentions of most of the `hot' topics that are currently exciting us. A single area that I would pick out as not adequately represented is that of modulators: given their importance in transmitters for high capacity systems, they deserve a more thorough treatment.

The stated objective of the book is to attract the attention of experts working in the field, as well as appealing to interested newcomers. This is a difficult compromise that works reasonably well. However, it inevitably falls down sometimes, in that the level of mathematical detail in, for example, the sections on DFB lasers would soon be inadequate for an expert working on the modelling of lasers, yet would represent too steep a learning curve for the interested newcomer or for someone more engaged in the fabrication aspects. The book would best suit someone becoming expert in one of the topics covered who either wants an introduction to a closely related topic or finds value in the comprehensive nature of the surrounding material. The very extensive list of references enhances the value of the book in that role, and for this reason alone it should find a place on many bookshelves.

Peter R Selway

1609

This is an ambitious book aimed at introducing the relatively new concepts and possibilities of optical fibre sensors for structural monitoring to the uninitiated who have an engineering or general physics background.

Measures draws the reader into the volume with a description of smart structures - the structural monitoring equivalent of artificial nervous systems - before a series of back-to-basics tutorials on optical theory and photonic technology.

The emphasis on smart structures early in the book is a worthy attention-grabber since it elevates the subject of structural health monitoring above just another set of techniques for making engineering measurements. The promise is to `revolutionize engineering design philosophy' by creating `intelligence within otherwise inanimate structures'.

In the latter two-thirds of the book, the author steps through the main issues of structural monitoring using fibre optic sensors. Intensity-based, interferometric, polarimetric and spectral sensors (including the ubiquitous Bragg grating) are compared and contrasted. The hot topic of strain versus temperature discrimination in fibre sensors earns a whole chapter, and several useful techniques for overcoming this cross-sensitivity are presented.
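
As background to the cross-sensitivity problem (standard fibre-sensor physics in common notation, not the book's own derivation), a Bragg grating reflects at the wavelength $\lambda_B = 2 n_{\mathrm{eff}} \Lambda$, which responds to both strain $\epsilon$ and temperature change $\Delta T$:

$$\frac{\Delta\lambda_B}{\lambda_B} = (1 - p_e)\,\epsilon + (\alpha + \xi)\,\Delta T,$$

where $p_e$ is the effective photoelastic constant, $\alpha$ the thermal expansion coefficient and $\xi$ the thermo-optic coefficient of the fibre. A single wavelength reading therefore cannot separate the two effects; discrimination techniques typically combine two measurements with different sensitivities and invert the resulting $2\times 2$ matrix.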

Installation of sensors is also discussed with reference to retro-fit and co-manufacturing (embedding) approaches. Examples of concrete constructions such as bridges (a frequent theme in the book) and fibre-reinforced plastics such as glass-fibre and carbon composite materials are considered.

A chapter on `short-gauge' sensors and applications deals in some depth with the Bragg grating as a strain sensor. The methods of multiplexing and interrogating these devices are explored with many examples from both Measures' own research and the work of other groups worldwide. The Beddington Trail bridge trial in Calgary, one of the first such installations of Bragg gratings, followed by the more ambitious Confederation Bridge, also in Canada, provide concrete examples of the technology's application.

The material is marred somewhat by the inferior reproduction of some of the photographs, especially those showing field installations of the optical sensors.

Other applications are not neglected. A description is also given of trials aboard a Norwegian naval vessel whose composite hull is monitored by Bragg gratings.

Interferometric sensors in similar applications trials are also covered in chapters on short and long gauge length devices.

Distributed strain and temperature sensing techniques using Fourier-transform, low-coherence and stimulated-backscattering methods are covered in the penultimate chapter. This draws together distributed measurement at a small physical scale, in the form of intra-Bragg-grating strain-profile measurements (on the scale of millimetres), and measurement over kilometres using stimulated Brillouin scattering.
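
For orientation (a standard relation, not taken from the book), the backscattered light in stimulated Brillouin scattering is shifted in frequency by $\nu_B = 2 n v_a / \lambda$, where $n$ is the fibre refractive index, $v_a$ the acoustic velocity and $\lambda$ the optical wavelength; since $v_a$ varies nearly linearly with both strain and temperature, mapping $\nu_B$ against position (by time of flight) yields the distributed profiles described here.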

In this reviewer's opinion the book dwells on strain monitoring in civil engineering structures at the expense of a broader scope, which could have included, for example, the detection of impacts or the acoustic emissions from crack propagation and other forms of structural damage.

Nevertheless, this volume is an impressive collection of background and examples of real applications in heavyweight engineering. It adds significantly to the claim that fibre optic sensors have at last arrived.

Peter Foote

1610

This book intends to cover the physical and technical aspects of computed tomography (CT) and is directed to an interdisciplinary readership. As the author states, there is no requirement for special prior knowledge.

After a brief historical overview, the author covers the principles of CT, discusses the technical concepts associated with the scanner gantry, radiation source and detector systems, and briefly highlights the various modes of operation. He then discusses spiral CT in greater detail, addresses image quality (particularly in relation to spiral CT) and dedicates a whole chapter to radiation dose. The next two chapters deal with the processing and visualization of images, especially the representation of three-dimensional data sets, and with special CT applications such as quantitative CT and imaging of the heart. The last chapter presents an outlook on the future of CT. Finally, the appendix presents a brief development of the convolution/backprojection reconstruction algorithm in two and three dimensions.
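
For orientation (the standard textbook form, not necessarily the book's own notation), the two-dimensional convolution/backprojection algorithm reconstructs the image $f(x,y)$ from its parallel projections $p_\theta(t)$ as

$$f(x,y) = \int_0^{\pi} (p_\theta * h)(x\cos\theta + y\sin\theta)\,\mathrm{d}\theta,$$

where $h$ is the ramp-filter convolution kernel: each projection is filtered and then `smeared back' across the image plane along its original direction.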

The book is well written and illustrated. A nice feature is the included CD-ROM, which contains a digital copy of all figures used in the book, additional clinical examples and a sample copy of a dose-calculation program and a volume-CT visualization program. Particularly illustrative are the movie clips showing the process of backprojection.

The book represents a nice introduction to the field of CT. A number of areas are treated in more depth, and the selection of these areas mainly reflects the author's own involvement in CT research. It is, therefore, not surprising that more than one third of all references relate to work performed in the author's laboratory, either during his time at Siemens or, more recently, at the Institute of Medical Physics at the University of Erlangen-Nuremberg. Most examples relating to CT scanners involve the Siemens brand. In the spirit of fairness, one might have expected more citations of work done by other recognized researchers in the field and by other manufacturers of CT scanners.

For a student of CT, this book represents a good start. Many issues related to present-day CT are well described and illustrated. For a more in-depth and quantitative review, as might be desired by a person starting research work in this area, the references given are a good way to proceed. However, in a number of areas (such as corrections for beam hardening, scatter and the exponential edge-gradient effect) such references are missing. Also, it is doubtful whether the appendix, dedicated to explaining the reconstruction algorithm, goes into sufficient detail to help a student appreciate the underlying fundamentals of the reconstruction formula.

The strength of this book is the consideration of known issues in the context of spiral CT. The author is undoubtedly one of the best-qualified persons to describe these issues for us. For this alone, it is worthwhile to buy the book, which should be mandatory reading for anybody seriously interested in the issues of modern-day CT scanners.

Thomas N Hangartner

1610

This is a well written book aimed at scientists who are interested in learning about the theory behind many of the common statistical tests and methods used in the analysis of experimental data.

The book focuses on the use of statistics for the measurement and analysis of errors and uncertainties in experiments. It considers only the classical theory of statistics; the alternative, but equally valid, Bayesian view is not covered.

All the standard parametric statistical tests used in the basic analysis of experimental data are explained and developed from first principles, although some of the more complicated proofs are omitted. In addition, the origins of and relationships between the common distributions - Normal, Chi-squared, Beta, Poisson, Exponential and Binomial - are explained, as is the theory of maximum likelihood.
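
As a reminder of the central idea (standard material, not quoted from the book), maximum likelihood estimates a parameter $\theta$ from independent observations $x_1, \ldots, x_n$ with density $f(x;\theta)$ by maximizing the likelihood

$$L(\theta) = \prod_{i=1}^{n} f(x_i;\theta), \qquad \hat{\theta} = \arg\max_{\theta} \ln L(\theta);$$

for the Normal distribution, for example, this recovers the sample mean as the estimator of the true mean.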

A set of examples based around classical laboratory experiments is used to illustrate the statistical tests and theories. These are very useful in enabling the reader to see the context in which the tests are typically used. It would be of benefit to many scientists if more statistical texts adopted this style.

The text reads easily, with the author following each point through in a clear and logical order; the many diagrams are also helpful.

The book claims to be suitable for those new to statistics. However, I think this is too optimistic, and some prior experience of the field is desirable. The potential reader should also have a fairly good knowledge of mathematics, and in particular of differential calculus.

R Haylock