Table of contents

Volume 13

Number 9, September 2002

REVIEW ARTICLE

R85

This article describes the principles and major applications of digital recording and numerical reconstruction of holograms (digital holography).

Digital holography became feasible once charge-coupled devices (CCDs) with suitable pixel numbers and sizes, and computers of sufficient speed, became available. Fresnel or Fourier holograms are recorded directly by the CCD and stored digitally; no film material requiring wet-chemical or other processing is involved. The reconstruction of the wavefield, which is conventionally done optically by illuminating the hologram, is instead performed numerically. The numerical reconstruction process is based on the Fresnel–Kirchhoff integral, which describes the diffraction of the reconstructing wave at the microstructure of the hologram.

In the numerical reconstruction process not only the intensity, but also the phase distribution of the stored wavefield can be computed from the digital hologram. This offers new possibilities for a variety of applications. Digital holography is applied to measure shape and surface deformation of opaque bodies and refractive index fields within transparent media. Further applications are imaging and microscopy, where it is advantageous to refocus the area under investigation by numerical methods.
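As an illustration of the reconstruction step described above, the following is a minimal sketch of the single-FFT discrete Fresnel method, one common numerical implementation of the Fresnel–Kirchhoff diffraction integral. It assumes a plane reference wave of unit amplitude and square pixels; all names and parameters are illustrative, not taken from the article.

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, pitch, distance):
    """Reconstruct intensity and phase from a digital hologram using
    the single-FFT discrete Fresnel transform (a common numerical
    approximation to the Fresnel-Kirchhoff integral)."""
    n_rows, n_cols = hologram.shape
    k = 2.0 * np.pi / wavelength
    # Spatial coordinates in the hologram (CCD) plane
    x = (np.arange(n_cols) - n_cols / 2) * pitch
    y = (np.arange(n_rows) - n_rows / 2) * pitch
    X, Y = np.meshgrid(x, y)
    # Multiply by the quadratic (chirp) phase factor, then one FFT
    chirp = np.exp(1j * k / (2.0 * distance) * (X ** 2 + Y ** 2))
    field = np.fft.fftshift(np.fft.fft2(hologram * chirp))
    # Both intensity and phase are recovered, as noted in the abstract
    return np.abs(field) ** 2, np.angle(field)
```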

PAPERS

1367

Acceleration can be measured using a laser Doppler system by estimating the change of signal frequency with time. This acceleration corresponds directly to the material derivative of flow velocity, and is an important quantity in the formulation of Lagrangian stochastic models for simulating turbulent flows in which transport is of primary interest. A non-parametric (FFT based) and a direct parametric estimator are introduced for estimating the acceleration parameter from Doppler signals. The estimators' performance is evaluated by examining their bias and comparing their variance to the Cramér-Rao lower bound. The non-parametric estimator with a window function proves to be accurate, robust and computationally fast. Validation experiments are performed in a laminar stagnation flow and measurements in a turbulent jet are used to demonstrate the application of the new estimators. The numerical values of the measured acceleration are high, but agree well with expectations.
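The non-parametric idea can be sketched simply: estimate the Doppler frequency in the two halves of a windowed burst by FFT, and convert the frequency change per unit time into acceleration through the fringe spacing. This is only a schematic of the FFT-based approach, not the authors' estimator; all names and parameters are illustrative.

```python
import numpy as np

def burst_acceleration(signal, fs, fringe_spacing):
    """Estimate velocity and acceleration from a laser Doppler burst
    by differencing the FFT peak frequencies of the two burst halves."""
    n = len(signal) // 2
    peak_freqs = []
    for half in (signal[:n], signal[n:2 * n]):
        spectrum = np.abs(np.fft.rfft(half * np.hanning(n)))  # windowed
        peak_freqs.append(np.argmax(spectrum) * fs / n)       # Hz
    dt = n / fs                                   # window-centre spacing
    chirp_rate = (peak_freqs[1] - peak_freqs[0]) / dt   # df/dt in Hz/s
    velocity = np.mean(peak_freqs) * fringe_spacing     # v = f * d_f
    acceleration = chirp_rate * fringe_spacing          # a = (df/dt) * d_f
    return velocity, acceleration
```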

1382

Many mechanical and optical components contain step features whose height changes far exceed the optical wavelength. This work therefore presents an interferometer based on variable synthetic wavelength interferometry (VSWI) in a differential heterodyne configuration to measure large step heights directly and unambiguously. The largely common-path configuration substantially reduces the influence of environmental disturbances, which are the main sources of error in VSWI. Only one external cavity diode laser (ECDL) is employed to synthesize a series of synthetic wavelengths in descending order; each synthetic wavelength is formed from a tuned wavelength and the initial wavelength of the ECDL. In contrast to wavelength scanning interferometry, this method does not require the laser wavelength to be tuned continuously. The step height is measured sequentially at these synthetic wavelengths, and a lock-in amplifier resolves the corresponding synthetic fractional fringes. The step height is then determined through a succession of optical path difference calculations in terms of the synthetic wavelengths and the measured synthetic fractional fringes. Three known step heights, verified by a gauge block interferometer, were used to confirm the performance of the proposed system. The results reveal a measurement uncertainty of approximately 80 nm for measured heights up to 25 mm.
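The sequential refinement can be written compactly. In a double-pass (reflection) geometry the height satisfies h = (m + ε)Λ/2 at each synthetic wavelength Λ, where ε is the measured fractional fringe and the integer order m is fixed by the previous, coarser estimate. The sketch below assumes this convention; exact signs and pass factors depend on the actual configuration and are not taken from the paper.

```python
def refine_height(h_coarse, synthetic_wl, fraction):
    """One stage of the descending synthetic-wavelength chain: the
    coarse estimate fixes the integer fringe order m, and the measured
    synthetic fractional fringe then refines the height."""
    m = round(2.0 * h_coarse / synthetic_wl - fraction)
    return (m + fraction) * synthetic_wl / 2.0

# Applied successively at synthetic wavelengths in descending order:
#     h = refine_height(h, wl_k, eps_k)
# each stage is valid provided the previous error is below wl_k / 4.
```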

1388

A binocular stereoscopic x-ray imaging technique based on a theory specifically developed for a single x-ray source and a pair of folded linear dual-energy x-ray detector arrays is presented. The theoretical analysis is discussed in terms of an ideal x-ray source and x-ray detectors, and has formed the basis for the development of an experimental system. The resultant images may be used for both visual inspection and for the extraction of three-dimensional coordinate information.
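For orientation, recovering three-dimensional coordinates from a stereo pair reduces, in the idealized parallel geometry, to triangulation on the disparity between corresponding features. The sketch below is this textbook approximation only, not the paper's folded linear-array geometry, which requires its own calibration; all names are illustrative.

```python
def stereo_depth(x_left, x_right, baseline, focal_length):
    """Idealized parallel-geometry triangulation: depth is inversely
    proportional to the disparity of a matched feature pair."""
    disparity = x_left - x_right
    if disparity == 0:
        raise ValueError("zero disparity: feature effectively at infinity")
    return focal_length * baseline / disparity
```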

1398

In situ scanning photoelectrochemical (PEC) microscopy is a technique for mapping the photocurrent stimulated by a focused light beam at an electrode-electrolyte interface. The technique gives position-sensitive information on the nature of the passivating films (composition, type and degree of crystallization, thickness, etc) that cover metal electrodes. In the analysis of PEC images, further aspects related to surface shape and/or surface irregularities (roughness, slope errors, surface waviness, etc) have to be taken into account. In this paper we present the effects of non-planar surfaces by measuring the photoresponse of passivating oxide films on cylindrical iron rods of different diameters. The variation of the angle of incidence as the laser beam scans the surface of the rod implies both a change in the spot size and a different optical response of the oxide-metal system, affecting the transmittivity of the oxide, the reflectivity of the metal and the light path inside the film. To evaluate the behaviour of the photocurrent, we have simulated these geometrical and optical effects using a simple model in the geometrical optics approximation. Images have also been acquired at various distances from the beam focus to highlight the degradation of spatial resolution induced by the focusing misalignment of the curved samples.
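The geometrical effect is simple to state: for a beam scanning parallel to the optical axis across a rod of radius R at lateral offset x from the apex line, the local angle of incidence is θ = arcsin(x/R), and the illuminated spot elongates as 1/cos θ. A minimal sketch of this geometry, with illustrative names only (the paper's full optical model also includes reflectivity and film path-length effects):

```python
import numpy as np

def incidence_angle(offset, radius):
    """Angle of incidence on a cylinder for a beam at lateral offset x
    from the apex line: theta = arcsin(x / R)."""
    return np.arcsin(np.clip(offset / radius, -1.0, 1.0))

def spot_elongation(offset, radius):
    """The spot stretches as 1/cos(theta), lowering local irradiance
    and degrading lateral resolution towards the rod edge."""
    return 1.0 / np.cos(incidence_angle(offset, radius))
```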

1404

This paper deals with the mathematical and empirical analysis of the properties of different ways of reducing data gathered with a complex measurement instrument; the specific case of a probe for the time-resolved measurement of skin-friction magnitude and direction is studied in order to draw general conclusions about the matter.

It is shown that different mathematical formulations of the data reduction variables and/or functions can cause important differences in the results, especially when the capabilities of the instrument must be pushed to their limits.

Various data reduction techniques are compared in order to highlight their respective properties; the results are interpreted in relation to the physical nature and properties of the example instrument, showing the underlying factors that lead to these properties and how this understanding can lead to better use of an instrument.

1414

An experiment has been constructed to determine the rovibrational states populated in the formation of H2 on the surface of cosmic dust under conditions approaching those of the interstellar medium (ISM). During the experiment, a beam of atomic hydrogen of controlled temperature is incident upon a target that is an analogue of cosmic dust. Molecular hydrogen desorbing from the target's surface is ionized using (2 + 1) resonance enhanced multiphoton ionization and detected using time-of-flight mass spectrometry. The experiment allows the rovibrational populations of the H2 molecules desorbing from the cosmic dust targets to be determined, providing information on the energy budget of the H2 formation process in the ISM. Preliminary results from the experiment, obtained to prove its viability, show that H2 molecules formed on a highly oriented pyrolytic graphite surface have a measurable population of excited vibrational and rotational states.

1425

In multi-frequency electrical impedance tomography (EIT) systems it is much easier to design voltage sources than current sources. It is also at times desirable to operate a voltage driving system as if it were a current driving system. This can be done by adjusting the individual voltage sources until a desired current pattern is obtained. Questions remain regarding the form of the adjustment algorithm and the circumstances under which it will converge. Through simulation and experimentation we have developed a simple algorithm which functions satisfactorily in most practical situations. We have also investigated its theoretical limits for convergence. Simulations showed that convergence is reached in all cases with little (0.1%) or no measurement noise. With moderate noise (0.5%) our algorithm failed to converge in 17 out of 6000 runs, while more significant measurement noise (1%) resulted in convergence in only 7 out of 6000 runs. Experiments with a 16-channel EIT system converged in all cases on volunteer arms, but failed some of the time in saline, with the number of successful runs decreasing with frequency (73.3% at 10 kHz, 71.1% at 125 kHz and 15.5% at 750 kHz), suggesting a possible link to measurement error.
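One plausible form of such an adjustment loop is sketched below, purely as an illustration (the paper's actual algorithm may differ): scale each voltage by a damped correction proportional to the current error, using the locally observed voltage-to-current ratio as an impedance estimate. All names and parameters are assumptions.

```python
import numpy as np

def drive_current_pattern(apply_voltages, i_target, v_init,
                          gain=0.5, tol=1e-3, max_iter=50):
    """Iteratively adjust voltage sources until the measured electrode
    currents match a target pattern.  `apply_voltages` maps a voltage
    vector onto the measured current vector (the physical system)."""
    v = np.array(v_init, dtype=float)
    for _ in range(max_iter):
        i_meas = apply_voltages(v)
        error = i_target - i_meas
        if np.linalg.norm(error) <= tol * np.linalg.norm(i_target):
            return v
        # Local impedance estimate; guard against near-zero currents
        z_est = v / np.where(np.abs(i_meas) > 1e-12, i_meas, 1e-12)
        v = v + gain * error * z_est
    raise RuntimeError("no convergence: measurement noise may be too high")
```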

1431

We present a simple technique for obtaining the time-resolved ion energy distribution function (IEDF) at a boundary in pulsed plasmas using a commercial quadrupole mass energy analyser. In this technique, ions are extracted from the plasma at selected parts of the pulse cycle through the synchronized electrical biasing of a grid assembly attached to the barrel of the instrument, forming an electrostatic shutter. This sampling method has an advantage over the normal technique of electronically gating the detected ion signal to achieve time resolution, since the IEDFs can be obtained even when the ion flight time through the instrument (typically 100 µs) is greater than the pulse period or the characteristic time of transients in the plasma under investigation. The arrangement therefore allows us to diagnose plasmas pulsed at high frequencies (≥10 kHz). At present, a time resolution of 4 µs can be obtained, limited only by the design of the driving electronics. The technique has been tested on a DC magnetron discharge operated in argon. The plasma was pulsed at a low frequency of 2 kHz, but with a discharge voltage waveform containing fast transients (on the µs time scale or faster). The results clearly show the evolution of the IEDFs during the pulse, responding to these fast transients, with a significant number of ions created at plasma potentials above 140 V. These time-evolved IEDFs cannot be obtained using the conventional, manufacturer's time-resolved method for this instrument. However, the new technique does introduce a small distortion in the measured IEDFs at energies above 120 eV, which is always observed irrespective of the position of the shutter window during the pulse. This is due to the transient nature of the discriminating grid bias used in the electrostatic shuttering. Its effects and possible elimination are discussed.

1437

We propose a method to obtain the dynamic specific heat at frequencies of hundreds of microhertz and above. Such low-frequency measurement is needed to capture the wide frequency dispersion of the dynamic specific heat in the temperature regions of first-order phase transitions. The method permits the use of medium-sized cylindrical samples, with radius and thickness of several millimetres. Although a sample of this size has the advantage of easy handling, systematic measurement and analysis using millimetre-sized samples to obtain the low-frequency dynamic specific heat had not been attempted before. We present results for the dynamic specific heat and thermal conductivity at frequencies from 0.0003 to 0.05 Hz obtained using the method.

1446

We report the development of a fibre Bragg grating (FBG) interrogation system that is capable of detecting independently and simultaneously the two orthogonally polarized signals reflected from a polarization-maintaining (PM) FBG. This relaxes the constraints on the bandwidth of the FBG spectrum, allowing the use of a shorter PM FBG (larger bandwidth) for sensing applications where a higher spatial resolution is required. It imposes no limitation on the measurable strain range and strain profile. The interrogation system is capable of decoding the complex spectral responses of a PM FBG subjected to a transverse load.

1450

This paper proposes a method for measuring the length of a pedestrian crossing and for detecting the state of traffic lights from image data observed with a single camera. The length of the crossing is measured from image data of the white lines painted on the road at the crossing by using projective geometry. Furthermore, the state of the traffic lights, green (go signal) or red (stop signal), is detected by extracting candidate traffic-light regions by colour similarity and selecting the true traffic light among them using affine moment invariants. Experimental results show that the length of a crossing is measured with a maximum relative error of less than 5% and an rms error of 0.38 m, and that a traffic light is efficiently detected by selecting the true traffic-light region with an affine moment invariant.
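As a sketch of the selection step, the first affine moment invariant of Flusser and Suk, I1 = (μ20 μ02 − μ11²)/μ00⁴, is unchanged by affine transformations, so a candidate region can be compared with a reference traffic-light shape regardless of viewpoint. The thresholds and the colour pre-selection step are application-specific and omitted here; names are illustrative.

```python
import numpy as np

def central_moment(mask, p, q):
    """Central moment mu_pq of a binary region."""
    ys, xs = np.nonzero(mask)
    x0, y0 = xs.mean(), ys.mean()
    return np.sum(((xs - x0) ** p) * ((ys - y0) ** q))

def first_affine_invariant(mask):
    """I1 = (mu20 * mu02 - mu11^2) / mu00^4 (Flusser-Suk); candidate
    regions whose I1 is close to that of a reference light shape
    would be retained as true traffic lights."""
    mu00 = float(mask.sum())
    mu20 = central_moment(mask, 2, 0)
    mu02 = central_moment(mask, 0, 2)
    mu11 = central_moment(mask, 1, 1)
    return (mu20 * mu02 - mu11 ** 2) / mu00 ** 4
```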

1458

In this paper we analyse digitally generated fractal images superimposed on periodic patterns using the variance of grey levels, characterized by four parameters. The variogram shows a linear regime at small scales and multiple periodic behaviour at large scales. The image texture is characterized by the fractal dimension D at small scales and by two textural parameters at large scales: dmin and dper. The dmin parameter is determined by the intersection of the straight line fitted to the fractal region with the tangent line at the maximum of the periodic region; it corresponds to the smallest cell size with enough statistical weight to produce periods. The dper parameter approximates the average distance between the nearest cells with enough statistical weight to produce periods. The characterization is completed with a study of image anisotropy through the changes under rotation of the fractal dimension D and the topothesy Λ.

This analysis is applied to SEM images of SiC emery paper particles of different grades and origins. The dmin results compare satisfactorily with the dimensional characteristics of the particle sizes according to the Federation of European Producers of Abrasives and ISO 6344/1 and 6344/3 standards. The dper values are consistent with the average distance between particles for different grades of paper, thus making the evaluation of different qualities possible.
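The small-scale part of the analysis, fitting the linear region of the log-log variogram to obtain D, can be sketched as follows. For an image treated as a fractional Brownian surface, γ(h) ∝ h^(2H) at small lags and D = 3 − H; the large-scale periodic analysis that yields dmin and dper is omitted. Names and the fitting range are illustrative, not the authors' code.

```python
import numpy as np

def variogram(image, max_lag):
    """Horizontal variogram of grey levels:
    gamma(h) = mean((I(x + h, y) - I(x, y))**2)."""
    img = image.astype(float)
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([np.mean((img[:, h:] - img[:, :-h]) ** 2)
                      for h in lags])
    return lags, gamma

def fractal_dimension(image, fit_lags=8):
    """Fit log gamma(h) = 2H log h + c over the small-scale linear
    region; an image surface then has D = 3 - H."""
    lags, gamma = variogram(image, fit_lags)
    slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)
    return 3.0 - slope / 2.0
```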

1467

A novel optical technique to measure condensate thickness on surfaces has been developed. Surface reflectance measurements in the far-infrared spectrum correlate well with the quantity of condensate mass on the surface. An empirical relation has been developed that provides an optical measurement of the local condensate thickness from 0 to 5 µm on a reflective surface as a function of time and location. Measurements of the mass transfer coefficient in a two-dimensional wall jet flow with the optical infrared reflectance technique agree with steady-state mass transfer measurements for the same flow. Mass transfer results also agree with those reported in the literature for similar flows.

1475

Proficiency testing is carried out for electromagnetic radiated emission (RE) measurements in the frequency range 30–1000 MHz. A spherical dipole radiator (SDR) is employed as the reference emitting source. Using the SDR, RE measurements are taken for horizontal and vertical polarizations at 12 open area test sites at a measurement distance of 10 m. To evaluate the proficiency of each test laboratory, the robust Z score and En ratio methods are used. For the En ratio method, the measurement uncertainty of each test laboratory is evaluated in accordance with the International Organization for Standardization guide to the expression of uncertainty in measurement. The results show that the outcome of a proficiency test depends on the evaluation method used.
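The two evaluation statistics have standard forms, sketched below under the usual conventions (expanded k = 2 uncertainties for the En ratio; a robust location and scale, such as the median and normalized interquartile range, for the Z score). Variable names are illustrative.

```python
import numpy as np

def en_ratio(x_lab, U_lab, x_ref, U_ref):
    """E_n = (x_lab - x_ref) / sqrt(U_lab^2 + U_ref^2), with U the
    expanded (k = 2) uncertainties; |E_n| <= 1 is satisfactory."""
    return (x_lab - x_ref) / np.sqrt(U_lab ** 2 + U_ref ** 2)

def robust_z(x_lab, assigned_value, robust_sd):
    """z = (x - X) / s, with X and s robust estimates (e.g. median and
    0.7413 * interquartile range); conventionally |z| <= 2 is
    satisfactory and |z| >= 3 unsatisfactory."""
    return (x_lab - assigned_value) / robust_sd
```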

1482

As both experimental and theoretical studies in atomic physics move towards more complex atoms and more sophisticated methods, the design and characterization of atomic beam sources are becoming important as specific requirements emerge. This is especially the case when electron impact and laser excitation are combined. The design of an oven is presented here, together with characterization of the atomic beam parameters of importance to both electron impact and laser excitation experiments. The oven was designed and tested using calcium but is relevant to a range of atoms. In this study the oven was tested at temperatures up to 710 °C.

1488

This paper presents a mathematical model which enables the velocity vectors and diameters of spherical droplets (or bubbles) in bubbly two-phase flows to be determined from the outputs of a local four-sensor intrusive probe. Each of the four sensors functions by measuring the fluid conductivity at its tip and so use of the probe is relevant to flows where there is a contrast in the electrical conductivity of the continuous and dispersed phases. The motion of a non-conducting spherical droplet has been simulated as it moves across a four-sensor probe, which has a leading sensor and three rear sensors in an orthogonal arrangement. The technique described in this paper relies upon measuring the time intervals between the droplet surface first contacting the leading sensor and then coming into contact with each of the three rear sensors. Assuming that the surface of the droplet comes into contact twice with each of the rear sensors, as the droplet moves across the probe, there will be six such time intervals. It has been shown that in order to obtain the droplet velocity components in the x, y and z directions with an accuracy of ±2%, the six time delays must all be measured with an accuracy of ±10 µs. It has been shown that the probe dimensions are critical to the measurement technique and that the separation of the four sensors should be of the order of 1 mm. It has also been shown that if the droplets are oblate spheroids rather than spheres then, provided their aspect ratio is greater than 0.75, the magnitude of the additional error in the estimates of the droplet velocity components is less than 10.5%. No account has been taken of the influence of the probe on either the shape or the motion of the droplet.
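For orientation, the planar-interface special case of such a probe has a closed-form inversion from the three first-contact delays; the paper's spherical-droplet model, which uses all six contact times, is substantially more involved. A minimal sketch of the planar case only, with illustrative names:

```python
import numpy as np

def planar_interface_velocity(rear_offsets, delays):
    """Velocity of a flat interface from a four-sensor probe.

    rear_offsets : 3x3 array; row k is rear sensor k's position
                   relative to the leading sensor (m)
    delays       : first-contact delays tau_k relative to the
                   leading sensor (s)

    For a plane with unit normal n moving at speed u,
    tau_k = (s_k . n) / u; solving S m = tau with m = n / u gives
    u = 1/|m| and the velocity vector v = u * n = m / |m|**2.
    """
    m = np.linalg.solve(np.asarray(rear_offsets, float),
                        np.asarray(delays, float))
    return m / np.dot(m, m)
```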

BOOK REVIEWS

1499

This monograph presents an introduction to the current status and future potential of the application of millimetre wavelength spectrometry to the quantitative analysis of gaseous mixtures. It will therefore be of interest to chemists and others working in this field, and to those who wish to enter it.

Within the spectrum of electromagnetic waves, the millimetre wave range is located between the microwave range and the far-infrared spectral region. The former may be characterized by the use of technical components like solid state devices as sources, mixers or detectors and by a circuit network (waveguides). For the far-infrared region, on the other hand, it is typical to use optical components like interferometers (Fabry-Perot or Michelson) and plane, spherical or even aspherical mirrors. In the millimetre wave range, at the boundary between these two, it is customary to employ a mixture of both techniques. This is also demonstrated in this book with the description of millimetre wave spectrometers and their components where the authors prefer the language and terminology of microwave techniques to that of optics. With respect to the physics at millimetre wavelengths, one can observe magnetic excitations in solids, e.g. antiferromagnetic resonance, and rotational transitions in gases. The monograph discussed here is devoted to the latter processes and to a very accurate measurement of the absorption lines due to rotational transitions in order to determine the concentration of the gas under investigation, i.e. to perform a quantitative analysis.

Consequently, it is helpful and convenient for the reader that Chapter 1 of this monograph starts with a brief and concise introduction to the theory of the interaction of millimetre wave radiation with gases including the basic spectroscopic facts and the various mechanisms of line broadening in gases. Finally, the theoretical considerations lead to the different line profiles, to the line intensity, i.e. the area under an absorption line, and to the peak absorption coefficient αmax. The absorption coefficient occurs rather frequently in the text but the reader will be irritated by its notation changing from the normal Greek letter α to γ on pages 2, 68 and 69 without any explanation. It is even more confusing that γmax appears in equation (4.8) with reference to equation (4.2) where the peak absorption coefficient is written as αmax.

In Chapters 2, 3 and 5, an overview is presented of the components for millimetre wave spectrometry, such as Fabry-Perot interferometers, sources or detectors, and of commercially available spectrometers. A more detailed discussion is provided of the millimetre wave spectrometer designed and built by the authors and of its components. The aim of this spectrometer is to be compact, low-cost, automatic and robust. A confocal Fabry-Perot interferometer (or cavity) is used as the sample cell in order to provide a sufficient effective path length for sensitive measurements on gases. The source is a frequency-stabilized Gunn oscillator with a YIG oscillator as intermediate frequency source. The source is frequency-modulated, and for phase-coherent detection of the signal a He-cooled InSb bolometer or a Schottky diode mixer is used. As far as the mechanical parts are concerned, i.e. the optical and technical components, this spectrometer looks relatively simple. However, great effort was required for the electronics and the control devices of this instrument. The reader learns, for example, from the explanations in Chapters 2 and 6 that it is necessary to drive one of the interferometer mirrors synchronously to the frequency modulation of the source by means of a piezoelectric actuator in order to keep the interferometer always in resonance in the course of the frequency changes due to the modulation of the source.

The performance of the authors' spectrometer and some of the results obtained with it are summarized in Chapters 4 and 6. From the discussion of the various mechanisms of line broadening and the resulting absorption line profiles, e.g. Lorentzians, Gaussians or combinations of both, it becomes obvious to the reader that the dependence of the absorption line intensities on the concentration is strictly linear while the corresponding data for the peak absorption value exhibit a slight curvature. This is demonstrated in figure 4.1 for nitrous oxide (N2O) in air. Many other experimental data provide convincing evidence of the capability of this millimetre wave spectrometer for the quantitative analysis of gas mixtures. Moreover, digital modulation techniques are also presented in Sections 4.1 and 4.2, replacing the sinusoidal frequency modulation, and these allow simpler expressions to be obtained for the spectral signal.

In conclusion, this monograph is really an excellent up-to-date introduction to the field of millimetre wave spectrometry and to the quantitative analysis of gases by means of measurements in this spectral range. Most details are based on the authors' own experience with their home-made spectrometer, but other approaches to the solution of the problems are also discussed. Therefore, this monograph can be recommended for all scientists interested in this field, working there or starting to work there. That means not only chemists but also physicists and scientists involved in environmental problems. Of course, this compact monograph (118 pages) cannot deal with all technical details, but the interested reader will most probably find an answer to his questions in the references at the end of each chapter.

Reinhart Geick

1499

The aim of the book is to provide a summary of the various facets of mass measurement, and it is aimed at the highest level of mass metrology as well as industrial and commercial users. The book is a mix of the general principles of mass metrology as summarized by the authors and a collection of specific experimental work carried out in some cases by the authors and in some cases lifted almost directly from the work of others (in all cases the sources of the original work are referenced). The eclectic nature of the book makes it quite difficult to read as a continuous text but it is an invaluable collection of data, some of which was not previously available, the rest only being available in individual published papers.

Chapter 1 provides an introduction to the field of mass metrology. Despite getting off to a poor start, misquoting the date of the first CIPM meeting (which should be 1889 not 1899) and identifying the Pavillon de Breteuil (the home of the Kilogram) rather loosely as `a building at BIPM', the chapter provides a useful introduction with some interesting historical detail.

Chapters 2 to 4 deal with the maintenance and dissemination of the unit of mass and consist mostly of a summary of data previously published by NIST (National Institute of Standards and Technology, USA) and the BIPM (Bureau International des Poids et Mesures).

Chapters 5 to 27 deal with various aspects of balance construction and usage and the calibration of mass standards. The information given again has a strong bias towards the procedures used by NIST. Most relevant areas of the mass measurement process are addressed but more detail in certain areas would have been welcome. In particular, a great deal of research has been done into the areas of weight cleaning and storage, and thermal effects and magnetic interaction between balances and weights, little of which is referenced in this book. Areas such as the density of water and air are covered in detail and the uncertainty analysis of the equation to calculate the density of air is particularly detailed and useful.

The book concludes with details of some research into specific applications of mass measurement, which are interesting and give a flavour of the more esoteric research that goes on in the mass metrology field. I would have liked to see some mention of the various projects being undertaken to redefine the unit of mass (the Avogadro, Watt balance and ion accumulation projects), as these will become important to the field over the next 10 to 20 years.

Overall the book gives a good introduction to the field of mass metrology albeit with a distinctly American slant. It covers all the major areas necessary for the calibration of mass standards in both an industrial and a research environment and, given the paucity of publications in this field, it is to be commended. In comparison with the only other text in the area, Comprehensive Mass Metrology edited by Kochsiek and Glaser (Wiley), this book does not have the depth of detail in the research areas and perhaps gives a more basic introduction for the industrial user, particularly for the American market.

Stuart Davidson

1500

The development of good material for any student course must involve a significant amount of interplay between the recipients and the presenter. There can be no doubt that the third edition of Professor Frieden's book carries the hallmark of such feedback. Not only have some previous discourses been honed but new material has been included, reflecting the ideas and developments in the subject material since the publication of the first edition 20 years ago.

The book is unique in the way that it deals with the principles of Probability and Statistics with a slant towards applications in Optics. The opening chapters are a paragon of clarity in setting the scene for the presentation of the rich material that follows. They contain interesting historical notes presented with gentle humour, together with the basic definitions associated with experimentation, statistics and probability and the ways in which they are interconnected.

Although many of the presentations of statistical and probability concepts lead to discussions in the field of optics (e.g. the effects of atmospheric turbulence on the quality of a stellar image), many of the standard statistical tests, such as chi-square and Student's t, are there too. There are also worked exercises on everyday problems closer to amicable bar discussions: for example, what is required to decide whether the flip of a coin is biased, or the likelihood of people within a group sharing the same birthday.

Much of the material is, however, much deeper than this and, via its underlying theme, links are provided to some of the essentials of very basic physics, including the Heisenberg Uncertainty Principle and the Higgs mass phenomenon. The depth of the coverage runs from final-year undergraduate to postgraduate level. The philosophy of presentation is that a true understanding of the discipline is best obtained by direct participation through the working out of set problems; the text provides many worked examples with copious and interesting problems (without answers) for engagement. By dipping in, it is also possible to find insight into a particular statistical problem that is to hand and requires solution.

Even with a text of concentrated discourse of some 500 pages, there are omissions. More might perhaps have been said on tests relating measurements to theory or prediction, such as those of the Kolmogorov-Smirnov type. The layout is very clear, although the typesetting software has the quirk of sometimes carrying the last section header on odd-numbered pages far beyond the pages to which it applies. All in all, this is a rare collection of ideas that stands alone, with a unique style, and deserves to be called a true `classic'.

David Clarke

1500

This book is a well-established and highly rated work. It is a relatively rare example of a text that appeals simultaneously to finite element users and code developers. Its credentials as a postgraduate textbook are beyond doubt and I believe it is widely recommended as such. One great advantage of the book is that it explains the fundamentals of the finite element method from the very basics up to considerable depth. The skilfully designed problems allow the reader to reach the required depth of knowledge relatively effortlessly.

It is essential to point out that the monograph fills an important market niche. It is focused on the theoretical soundness of computational techniques and the reliability of the theoretical assumptions. This is an essential service to the computational researchers in the days of almost total domination of `black box' commercial software that claims to provide solutions for virtually any existing applied mathematics problem without a hint of any potential problems or insufficient accuracy.

The text is strictly focused on elliptic partial differential equations, which represent by far the most common problems in the applied mechanics field. The structure of the book allows the reader to differentiate strictly between theoretical models and computational methods for solving these problems. Both parts are equally well presented and very clearly written. However, I would strongly recommend that the author add a small paragraph at the end of each chapter referring to well-established and easy-to-understand textbooks for readers who are somewhat deficient in their knowledge of advanced mathematical concepts such as norms, spaces etc. Such an addition would widen the appeal of the book and make it more approachable to application-minded readers.

My second recommendation is to use a bold font for vector and matrix variables, as at the moment these variables are not straightforward to recognize.

In conclusion, I strongly recommend this book.

Peter Dabnichki

1500

Modern Advanced Mathematics for Engineers is a comprehensive introductory-level textbook covering a wide range of the mathematics necessary in modern engineering. The approach adopted in this volume is both systematic and rigorous, introducing first the formal language of mathematics in chapters on Set Theory, Mathematical Logic and Mappings. Building on this material, vector (linear) spaces are then introduced, as well as other important algebraic structures. Matrices are introduced as mappings between spaces so that, for example, eigenvectors are easily described as a form of transformation-invariant vector. Later chapters go on to cover topics including metric and topological spaces, Fourier series, Fourier and Laplace transforms and, finally, partial differential equations.

Overall the book is well written in a relaxed and informal style that is particularly suited to first- or second-year undergraduates who are not yet completely comfortable with formal mathematics. Most topics are very well explained, and a variety of informative examples and exercises are provided with each chapter. My only real concern with the volume is that its breadth means that it is not sufficiently focused for practical use in Engineering Mathematics courses. For instance, although the first four or five chapters cover material that is commonly found in discrete mathematics courses, the chapters on logic and mappings are rather sparse. In particular, there is no real discussion of interpretations and models for the predicate calculus, despite quantifiers being introduced. I would also comment that the truth-table definition of logical connectives is easier for students to grasp than the truth-functions approach adopted by the authors. Also, the chapter on relations includes no real details on partial orderings and lattices, which are important for modern information and knowledge engineering. Similarly, the later chapters of the book could be suitable for a first- or second-year mathematical methods course, although the same material is covered in more depth elsewhere.

Despite these reservations, however, I think this is a useful book that makes an important contribution to the teaching of Engineering Mathematics.

Jonathan Lawry

1501

According to the introductory text, this book is written for designers of high-performance, highly efficient digital smart sensors and data-acquisition systems. `Smart' here stands for gathering the instrument functions - sensing, interfacing, signal processing and intelligence - on a single chip. The book focuses on smart sensors with a quasi-digital output (frequency, time period etc), which is reflected in a number of chapters on signal conversion techniques yielding this type of output.

Chapter 1 reviews available smart sensors in the frequency-time (f-t) domain and shows the high performance of such sensor systems. Since most sensors produce an output voltage or a parameter change, conversion to the f-t domain is required, a subject covered in Chapter 2, with an extensive discussion of voltage-to-frequency conversion and a brief section on capacitance-to-period or duty-cycle conversion.

Multichannel sensor systems are addressed in Chapter 3. The two classical approaches, time-division channelling and space-division channelling, are compared, and the implications with respect to cost, chip area and sensitivity to transmission errors are discussed. As a logical consequence, Chapter 4 continues with a discussion of frequency-to-code conversion techniques. Several standard counting methods are compared, and a brief error analysis is given. This discussion continues in Chapter 5 with a more in-depth performance analysis of various non-traditional and self-adapting counting methods. There follows a short Chapter 6 on signal processing techniques in the quasi-digital domain, and a larger Chapter 7 devoted to program-oriented conversion methods concludes this main topic of the book.

The three final chapters address a variety of themes: multichannel sensing systems (car wheel speed sensing as an example of the material covered in Chapter 3), virtual instrumentation (also with examples from the automotive field), software design (at the instruction code level), instrument buses for smart sensing systems and network protocols. The very last topic concerns the smart interface - an IC enabling the interfacing of all kinds of analogue sensors. Only two types are discussed: a UTI (universal transducer interface) in great detail and a TDC (time-to-digital converter) only briefly.

At the end of the book we find a list of 188 references (many of them in Russian, and so not generally accessible), a note on the Sensors Web Portal, a glossary of terms and a useful index.

In my view the book is of interest to the IC designer entering the field of smart sensors. In particular, it contains valuable information on signal conversion towards the frequency-time domain, a significant aspect of smart sensor design. Some other topics are discussed more cursorily, and should be considered as an invitation to consult more specific books on these subjects.

Paul Regtien

1501

This is a reissue in the Cambridge Mathematical Library series of a book first published in 1978. When it was first issued, it had enormous impact on all those working in fluid flows, and on waves in particular. I quote from the review by D C Pack, published in 1979 in the Journal of Fluid Mechanics (vol 90, pp 605-8): ``The book will become a standard work for reading and consultation by all interested in fluid flow. It will be invaluable to research students ... Even among the most experienced readers few, if any, will fail to find in it new insights ... comments and comparisons that illuminate the chapters.'' The same comments can be made today, because in spite of the two and a half decades that have now passed, it retains its immediacy and vitality.

The book is designed to develop the fundamental concepts of waves in fluids through an in-depth analysis, in each of four chapters, of four important and representative examples of waves in fluids, namely, sound waves, one-dimensional waves, water waves and internal waves. A major highlight of the book is the thorough treatment of group velocity and ray-tracing ideas, developed throughout the text and with a climax in chapter four. As a potential textbook, or simply as an invaluable research aid, it remains indispensable. However, it does not cover such major developments in the last two decades as solitons, and the role of canonical equations such as the Korteweg-de Vries equation and the nonlinear Schrödinger equation. Hence it will need to be used in conjunction with more recent texts which cover these topics.

R Grimshaw

1501

This book aims to provide a complete and authoritative summary of the latest developments in the area of smart sensors. The second edition has been updated to include the emerging IEEE 1451 standards for smart sensors. It provides a wide overview for engineers and scientists working in the field of sensors and instrumentation who wish to familiarize themselves with the latest trends. It will also be useful for non-technical professionals requiring an insight into some of the most recent advances in sensor technology and future products.

Before discussing smart sensors themselves, we need to understand exactly what a smart sensor is. Frank quotes the IEEE 1451.2 specification, which defines a smart sensor as one ``that provides functions beyond those necessary for generating a correct representation of a sensed or controlled quantity. This function typically simplifies the integration of the transducer into applications in a networked environment''. The book leads the reader through the fundamental concepts of smart sensor systems and has a chapter dedicated to micromachining, which gives a broad overview of the main methods used in the production of many modern microsensors.

Wide coverage is given to examples of sensor systems utilizing microcontrollers and digital signal processing (DSP) techniques for auto-calibration, compensation and linearization techniques that make the sensor `smart'. There is also an up-to-date review of sensor communication techniques and standards. One area of microsensor technology that is often overlooked is that of sensor packaging and its effect on performance. Frank dedicates a chapter to this subject and describes how some of the potential problems can be avoided. For the non-technical reader there is a good glossary at the back of the book that covers most of the acronyms used within the discipline.

The final chapter of the book looks at the next phase of sensing systems and includes some ideas and proposals for the capabilities of the next generation of sensors. The author also reviews some of the alternative definitions of smart sensors. At this point he cites the overuse of the word in American vocabulary: Smart Scrub™ from Dow, Smart Solutions℠ from the United States Postal Service and Smart Ones™ from Weight Watchers. In the UK, of course, `smart' has a slightly pejorative sense when followed by certain other words; that's why we prefer the phrase `intelligent sensor'.

Neil White

1502

If you search the web for `measurement uncertainty', you will obtain a list of some 14500 items. This confirms, if confirmation were necessary, the pervasiveness of the concept. Curiously, I am not aware of another scientific book specifically devoted to the evaluation of measurement uncertainty; this book therefore certainly deserves careful consideration. If you go through it, you quickly realize that it is a multi-faceted book. It is an introduction to metrology, especially in chapter 1, due to T J Quinn, FRS, Director of the Bureau International des Poids et Mesures. It is an introductory course on uncertainty, largely but not exclusively based on the reference document, the Guide to the Expression of Uncertainty in Measurement (GUM), issued in 1995 by the seven leading organizations involved in measurement, such as the BIPM, the IUPAC, the IUPAP and so on. It is also a rich collection of examples and sometimes of curiosities (see Peelle's Pertinent Puzzle) taken from a wide spectrum of specialized applications. Furthermore, it contains an overview of one of the developments under consideration by the Joint Committee for Guides in Metrology (the body currently in charge of the GUM), namely the treatment of the case of more than one measurand, which gives the author an occasion for an excursus on least squares and their application in metrology. From this viewpoint, the reader will find answers to a very large number of possible questions concerning routine uncertainty evaluation in, say, a calibration laboratory.

However, in my opinion the distinctive feature of the book is the Bayesian flavour that one can perceive here and there from the first few chapters and which comes fully into view in chapter 6, Bayesian Inference, the longest in the book. As a matter of fact, this book is essentially a manifesto of Bayesian principles applied to measurement uncertainty, and as such its title could be somewhat misleading to the unprepared reader. The application of Bayesian techniques to measurement is largely due to German scientists, with whom the author has cooperated. The approach is attractive in that it provides a natural way to combine fresh data from the current experiment with prior knowledge such as, for example, values coming from previous calibrations.

However, Bayesian techniques are far from universally accepted within the metrology community. If you search the web by crossing `measurement uncertainty' with `Bayes', you get only 350 items. There are essentially two reasons for this limited acceptance. The first has to do with the degree of subjectivity that is in some cases unavoidable in the assignment of the prior distribution, although Bayesian theory can provide sufficiently convincing motivation. The second reason is more practical: a strict application of Bayesian principles leads, even for comparatively simple cases, to complicated expressions which in most cases must be solved numerically. In addition, application to the case of n repeated measurements, which is readily dealt with using the usual frequentist approach (an approach, incidentally, severely criticized by Bayesians), leads to the condition that n > 3 in order for the standard uncertainty to exist. Perhaps some comment on this seemingly unphysical consequence of the Bayesian approach would have been desirable.

Overall, however, despite a notation that I sometimes found heavy and pedantic, the book is a good tool for anybody wishing to understand the topic of measurement uncertainty better. The (dominant) section on Bayesian inference is interesting and collects a series of results that were previously scattered widely across the literature. The references also reflect to some extent the author's bias towards Bayesian principles but, provided you are aware of this bias, I warmly recommend you to read and consider the contents of this book.

Walter Bich

1502

A V Srinivasan and D Michael McFarland have written a valuable book on the design and analysis of smart structures. The book summarizes the operating principles of materials whose properties can be varied by changing an external parameter: piezoelectric materials, shape memory alloys, and electrorheological and magnetorheological fluids. It also discusses fibre optics and biomimetics, and provides the constitutive equations that describe each material's behaviour.

As the authors describe the materials, they also show, with examples, how these properties can be used to design, build and analyse smart structures, i.e. structures that function as desired.

The book begins with a differentiation between material properties and smart structures and highlights the fact that `smartness' cannot be attributed to the material itself - it is the structures made from these materials and embedded with controls that are smart.

The book describes the use of piezoelectric properties in the development of inchworm motors and the actuation of structural components. It next discusses shape memory alloys and their transformation mechanism, and describes the constitutive equations that explain the shape memory effect. Experiments that enable design for vibration control of rods and structures are then discussed.

Next, the composition, behaviour, models, applications (clutches and dampers) and the effect of device geometry when using electrorheological and magnetorheological fluids are discussed. Complete chapters are dedicated to the design of vibration reduction devices, to mistuned structures and to the control issues relevant to the subject.

The authors have also described the use of fibre optics, biomimetics, fibre-reinforced structures and their applications in the sensing arena.

The book addresses an important theme. Most of the examples reported in the book are based on the authors' past work. The scale of the structures described in the examples ranges from the micro domain to the macro domain.

The book does not flow very well, probably because the variables in equations are poorly defined and in many instances equations are introduced without much background. The units used in the book are not consistent, perhaps a reflection of the 30-year span from which the sources are drawn. The effects of size and the scaling of structures are not discussed in detail.

The book is a useful source of constitutive equations for a practising engineer. In its current edition, it would be a good reference book for a student who has studied these materials for at least a semester.

Shekhar Bhansali

1503

This is the fifth edition of this popular handbook. As was the case in previous editions, the book is well presented over its 500+ pages, with nearly 250 figures, clear exposition of the mathematical equations and high-quality greyscale images.

The book is in four main parts. The first lays the foundations of the subject and gives a good introduction to the basic tools used in image processing, such as random variables and probability density functions, and includes a very sound section on image representation. The basics of digital filtering by convolution are also explained in this part of the book.

The second part of the book deals with image formation and preprocessing. As well as the more conventional material on image formation, e.g. detectors, basic optics etc, this section interestingly includes material on 3D imaging from structured light and tomography. The author is to be congratulated for including this important, but sadly often neglected, aspect of imaging.

Section three deals with feature extraction in a very competent, if rather conventional, fashion. All of the major techniques are covered, including filtering, gradient methods, neighbourhood operations etc. The inclusion of both motion extraction and texture analysis makes this section all the more worthwhile.

The book's final part deals with the problems of image analysis. This includes interesting and useful treatments of inversion problems, morphological operations, shape descriptors etc.

In all there are some 20 chapters in the book. Each chapter concludes with a useful list of further reading. Overall the references given in the book are accessible and reasonably up to date.

The supporting CD offers the complete text of the book in electronic form and a copy of a general-purpose image processing package, which makes a useful tool.

An overall evaluation would be that this is an excellent and comprehensive review of digital image processing. While not an elementary text - the book requires a reasonably accomplished level of mathematical knowledge - it would be an excellent text for a final-year degree or Masters-level module in the subject. It would also be a first-class introductory text for anybody new to research in this field. Finally, the book would be of value as a reference text for established workers in the field; its comprehensive nature and high-quality references mean that it is an excellent starting point for information on areas outside one's immediate specialty.

David Burton

1503

There is a growing need in today's world to support the study of industrial problems with an appropriate analysis anchored in the underlying physical principles involved and their description in mathematical terms. With the increasing availability of ever-improving computational facilities, it is all the more important that a balanced understanding of a problem be based on sound physical principles, concentrating on effects of leading importance while ensuring that secondary effects are appropriately considered. This requires proper consideration of mathematical techniques, often centred on partial differential equations, both to formulate the equations of motion and to achieve their solution, together with an appreciation of a wide range of methods, analytical as well as numerical, from underlying symmetries to asymptotic techniques. The purpose of this book is to address a selection from a wide-ranging set of typical industrial problems, highlighting just these features in a balanced way.

The book meets these objectives clearly, ensuring a high quality of presentation as well as content. The wide-ranging scope of the problems considered does not sacrifice the depth to which they are treated. In summary, it achieves its objectives in a very useful, clear and satisfactory way and constitutes a very good example of how the practical problems of industry can be helped by mathematical modelling.

The material is very well laid out in a clear and accessible way with maximum simplicity in mind. Appropriate diagrams and a very suitable set of references to support further study of the problem are also included.

The book meets the high standards of production characteristic of Cambridge University Press. Its readership should comprise third-year undergraduate students as well as postgraduate MSc and PhD students, extending to include all those in industry whose interests lie in the province of practical mathematical modelling with applications in mind.

The book is available in hardback as well as in economical paperback copies that could be of particular benefit to students.

Phiroze Kapadia