Table of contents

Volume 13

Number 2, February 2002

PAPERS

141

A new supersonic molecular beam line has been attached to an existing UHV apparatus. Three different nozzles mounted on a rotatable manipulator allow for independent gas feeds. In this way, dosing sequences with different reactive gases can be carried out within a few minutes. Owing to the compact design of the beam line, the apparatus does not forfeit its original flexibility and mobility. In addition to standard techniques such as thermal desorption spectroscopy, low-energy electron diffraction, reflection absorption infrared spectroscopy and x-ray photoelectron/Auger electron spectroscopy, a quadrupole mass spectrometer mounted on a linear drive, together with viewports, allows for photochemical experiments or other laser applications.

150

Diagnostic tools for real-time and direct gas analysis have been developed. Simultaneous measurements of gas and particle temperatures (280-330 °C) and gas concentrations (CO, CO₂, HCl, H₂O) are demonstrated in a hot particle-laden flue gas with a fibre-optic probe connected to a Fourier transform infrared spectrometer. The gas temperature is found from the thermal radiation at the 2350 cm⁻¹ CO₂ fundamental band, whereas the gas concentrations are determined by comparing the measured transmittance spectra with a spectroscopic database and with validation measurements made using the Hotgas facility at Risø. Measurement uncertainties are discussed. The measured local gas temperatures and concentrations are in good agreement with measurements made with conventional equipment.
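
The concentration retrieval described above compares measured transmittance spectra against a spectroscopic database; the physics underneath is the Beer-Lambert law. The sketch below is a minimal single-wavenumber illustration of that inversion, not the paper's actual spectral-fitting procedure, and the cross-section and path length values are hypothetical.

```python
import math

def concentration_from_transmittance(transmittance, sigma_cm2, path_cm):
    """Invert the Beer-Lambert law T = exp(-sigma * N * L) for the
    number density N (molecules per cm^3)."""
    return -math.log(transmittance) / (sigma_cm2 * path_cm)

# Hypothetical values: absorption cross-section 1e-19 cm^2, 100 cm path.
# A gas with N = 1e17 cm^-3 then gives T = exp(-1) ~ 0.368.
N = concentration_from_transmittance(math.exp(-1.0), 1e-19, 100.0)
```

A real retrieval fits a whole band of such points simultaneously, with the temperature entering through the line strengths.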

157

A feasibility study was conducted on the application of magnetic flux leakage (MFL) inspection to the evaluation of weld quality in automotive tailor-welded blanks (TWB). Using a permanent magnet configuration, magnetic flux was directed through the weld region of a TWB. A Hall effect sensor was coupled to the movement of a digital plotter and was, thereby, scanned around the weld region. Signals from the Hall effect sensor were processed and correlated with defects to determine corresponding MFL signatures. Simulated through-hole defects as small as 0.34 mm in diameter were readily detected. Furthermore, there was a reasonably linear relationship between the MFL signals associated with these defects and the diameter of the defect hole. Preliminary tests with specimens having naturally occurring defects such as concavity, pinholes, and undercutting, indicate that the MFL technique has excellent potential as an inspection method in this application.

163

In this paper we describe a new approach to the detection of human body electrical activity which has been made possible by recent advances in ultra-low-noise, ultra-high-input-impedance probes. As we demonstrate, these probes, which do not require a real current-conducting path in order to operate, can be used non-invasively both on and off the body. We present remarkable new data showing the application of these probes to the remote, off-body sensing of the electrical activity of the heart at distances of up to 1 m from the body and to high-resolution electrocardiograms. We suggest that in the future such probes may form the basis of a radically new technology for measuring the dynamics of the human body, as well as of non-contact imaging systems for pre-emptive and diagnostic medicine.

170

We have carried out a complete analysis of the recently proposed thermoelectric SQUID method for non-destructive material characterization. The magnetic fields generated by thermoelectric currents in conducting specimens with different material flaws were calculated using a finite element computation technique. It is demonstrated that simulated first and second derivatives of the normal component of the magnetic field can be obtained for defects of arbitrary shape and compared with the experimental signals usually measured by axial gradiometers.

174

A multi-purpose automated dilatometer has been developed for simultaneous measurement of expansion/contraction under the effects of magnetic field and/or mechanical stress and/or temperature. A differential capacitive position sensor, operating together with a microprocessor-controlled digital transformer bridge, is used as a displacement transducer with a resolution of several tens of nanometres. Measurements are accomplished in the temperature range from -150 to 200 °C. Automatically controlled variation of the applied magnetic field is provided by an electromagnet with a field homogeneity of 0.5 × 10⁻⁵ of the magnetic field strength (maximum 1.1 T). A special control system was developed for automated mechanical loading of the sample under investigation. Some examples of measurements completed on the magnetic shape memory alloy Ni₂MnGa are presented for three cases: (i) strain as a function of the applied magnetic field; (ii) creep under constant magnetic field or mechanical stress; and (iii) phase transformations during heating/cooling with and without an applied magnetic field.

179

A novel, compact, robust and highly versatile polarization-modulated electro-optical instrument for measuring the material properties, fluid flow parameters, stress, strain and molecular structure of optically anisotropic materials is described. The new instrument uses two polarized laser beams. Each beam is linearly polarized, with the two polarization states orthogonal to each other. The laser beams are sinusoidally intensity modulated with a 180° phase difference by two laser drivers and a signal inverter connected to the output of one of the laser driver circuits. The anti-phase intensity modulation of each orthogonal polarization increases the instrument's sensitivity through the use of heterodyning signal analysis techniques with a single lock-in amplifier (LIA). When the two semiconductor laser beams are optically combined, the result is a laser beam with a constant optical power level composed of time-varying power levels in each orthogonal polarization state. The polarization state of the laser light is thus modulated without the use of a traditional modulator. The instrument photodetector produces a direct-current signal along with a periodic signal at the modulation frequency that is recovered by an LIA tuned to that frequency. By combining these signals in the appropriate relationship, a material's phase retardance or average molecular orientation angle may be measured. The main advantages of this technique over existing methods are lower cost, owing to the absence of an optical modulator, smaller size compared with a photoelastically modulated system and improved sensitivity over continuous-wave laser crossed-polarizer instruments.

186

Absolute measurement of detector quantum efficiency using optical parametric down-conversion has been extensively studied for the case of a continuous-wave pump. In this paper, we use the temporally and spatially correlated properties of the down-converted photon pairs generated in a nonlinear crystal pumped by a femtosecond laser pulse to perform an absolute measurement of detector quantum efficiency. The result is in excellent agreement with the value measured in the conventional way. A lens with a long focal length was adopted to increase the intensity of the down-converted entangled-photon source efficiently.
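
The absolute calibration exploited here is the Klyshko method: because down-converted photons arrive in pairs, the efficiency of one detector follows from the ratio of coincidence counts to the singles counts of the opposite arm, with no external standard. The count numbers below are invented for illustration; the paper's pulsed-pump measurement handles timing windows and accidentals in more detail.

```python
def klyshko_efficiency(coincidences, accidentals, singles_other_arm):
    """Detector quantum efficiency from photon-pair counting:
    eta_1 = (C - C_acc) / N_2, where C is the raw coincidence count,
    C_acc the accidental coincidences and N_2 the singles count of
    the heralding (opposite) arm over the same interval."""
    return (coincidences - accidentals) / singles_other_arm

# Invented example counts: 4500 coincidences, 500 accidentals and
# 10000 singles in the heralding arm give an efficiency of 0.4.
eta = klyshko_efficiency(4500, 500, 10000)
```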

190

In this study, a variable-resolution optical measurement system (VROPMS) based on triangulation is proposed. The VROPMS optical scanning probe is composed of dual CCD cameras fitted with zoom lenses and a line laser diode projector. A flexible and novel calibration procedure for the VROPMS is developed to acquire the system parameters quickly and accurately. The central position of the reflected laser image is calculated by Gaussian least-squares fitting of the beam intensity, so subpixel resolution can be achieved. Experimental calibration results show that the higher the lens magnification, the finer the derived system resolution. The best accuracy at the zoomed focus position is about 0.02 mm. The system can flexibly zoom in or out to measure a 3D object profile in sections according to the approximate surface profile. Mesh images taken from different zoom positions can then be patched together using an image-matching technique to reconstruct the entire profile. A human sculpture with a complex surface profile is measured with the VROPMS as a practical illustration of the effectiveness of the system.
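
The Gaussian-fit subpixel localization mentioned above can be illustrated with its standard three-point variant: fitting a parabola to the logarithm of the intensities around the brightest pixel, which is exact for a noise-free Gaussian profile. This is a simplified stand-in for the full least-squares fit used in the paper.

```python
import math

def gaussian_subpixel_peak(intensity):
    """Locate a Gaussian peak to subpixel accuracy using the
    three-point log-parabola estimator (assumes the maximum is not
    at either end of the profile)."""
    m = max(range(len(intensity)), key=lambda i: intensity[i])
    a = math.log(intensity[m - 1])
    b = math.log(intensity[m])
    c = math.log(intensity[m + 1])
    # Vertex of the parabola through (m-1, a), (m, b), (m+1, c).
    return m + 0.5 * (a - c) / (a - 2.0 * b + c)

# Synthetic laser-line profile: a Gaussian centred at pixel 10.3.
profile = [math.exp(-(x - 10.3) ** 2 / (2 * 2.0 ** 2)) for x in range(21)]
centre = gaussian_subpixel_peak(profile)  # recovers 10.3
```

Because the logarithm of a Gaussian is exactly a parabola, the estimator is exact here; with camera noise one would fit more points by least squares, as the paper does.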

198

Magnetic resonance imaging (MRI) magnets have very stringent constraints on the homogeneity of the static magnetic field that they generate over desired imaging regions. The magnet system also preferably generates very little stray field external to its structure, so that ease of siting and safety are assured. This work concentrates on deriving means of rapidly computing the effect of 'cold' and 'warm' ferromagnetic material in or around the superconducting magnet system, so as to facilitate the automated design of hybrid material MR magnets.

A complete scheme for the direct calculation of the spherical harmonics of the magnetic field generated by a circular ring of ferromagnetic material is derived under the conditions of arbitrary external magnetizing fields. The magnetic field produced by the superconducting coils in the system is computed using previously developed methods. The final, hybrid algorithm is fast enough for use in large-scale optimization methods. The resultant fields from a practical example of a 4 T, clinical MRI magnet containing both superconducting coils and magnetic material are presented.

206

A complete scheme for an effective digital signal processing method for a dual-sensor resistivity probe in two-phase bubbly flow measurement was developed and verified. The method comprises three stages: phase discrimination, identification of miscounted bubbles and statistical analysis. Compared with previous methods, it has the advantage of incorporating the real physics of bubble-probe and liquid-probe interactions. This paper also provides a method to identify miscounted bubbles among the possible categories of signal pattern and to screen them out using a diagnosis tree. The validity of the signal processing method was confirmed by the agreement of labelled sampling data and by comparison of integrated local values with known global references.
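
For context, the basic quantities a dual-sensor resistivity probe yields after phase discrimination are the local void fraction (the fraction of time the tip sits in gas) and the bubble velocity from the time delay between the two tips. The sketch below uses invented numbers; the paper's diagnosis-tree processing is considerably more elaborate.

```python
def local_void_fraction(gas_intervals, total_time):
    """Time-averaged local void fraction: summed gas-contact time of
    the probe tip divided by the total sampling time."""
    return sum(t1 - t0 for t0, t1 in gas_intervals) / total_time

def bubble_velocity(tip_separation_m, time_delay_s):
    """Interface velocity from the delay between the two sensor tips."""
    return tip_separation_m / time_delay_s

# Invented record: gas contact during [0.1, 0.2] s and [0.5, 0.8] s
# of a 2 s sample, and a 2 mm tip spacing crossed in 4 ms.
alpha = local_void_fraction([(0.1, 0.2), (0.5, 0.8)], 2.0)  # 0.2
v = bubble_velocity(2e-3, 4e-3)                             # 0.5 m/s
```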

218

Accuracy in Q-factor measurements for low-loss mechanical systems is limited by noise in long-term amplitude ring-down curves. In particular, seismic noise can couple strongly to a pendulum. By correlating the appropriate component of seismic data with a ring-down curve, we show that it is possible to remove the variation in amplitude due to seismic excitation. Removal of the seismic signal allows a faster and more accurate determination of pendulum Q-factors. In addition, the technique could allow the Q-factor to be determined without large-amplitude excitation.
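
The Q-factor extraction underlying this technique is a fit of the exponential ring-down envelope A(t) = A0 exp(-π f0 t / Q). Below is a minimal pure-Python least-squares fit to ln A for synthetic, noise-free data; the seismic-correlation step that is the paper's actual contribution is not included.

```python
import math

def q_from_ringdown(times, amplitudes, f0):
    """Fit ln A = ln A0 - (pi * f0 / Q) * t by linear least squares
    and return the quality factor Q."""
    logs = [math.log(a) for a in amplitudes]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times, logs))
             / sum((t - t_mean) ** 2 for t in times))
    return -math.pi * f0 / slope

# Synthetic ring-down of a 1 Hz pendulum mode with Q = 1000.
ts = [10.0 * i for i in range(101)]
amps = [math.exp(-math.pi * 1.0 * t / 1000.0) for t in ts]
Q = q_from_ringdown(ts, amps, 1.0)  # recovers 1000
```

With real data the amplitude samples carry the seismic excitation discussed above, which is why subtracting the correlated seismic component before this fit shortens the measurement.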

222

The accuracy of a heterodyne interferometer is often limited by the periodic nonlinearity that arises mainly from imperfect separation of the two optical frequencies. This paper describes a new method for compensating the nonlinearity of a heterodyne laser interferometer that uses the quadrature mixing technique for phase measurement. The compensation is based on elliptical fitting of the two phase-quadrature signals obtained by a lock-in amplifier. A brief analysis of the nonlinearity, the compensation scheme and experimental results obtained with a Zeeman-stabilized He-Ne laser are presented. The results show that the suggested method can compensate for the nonlinearity of the heterodyne interferometer with sub-nanometre accuracy.
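
Once the ellipse parameters of the two quadrature signals are known, the phase is recovered by mapping the ellipse back onto a circle (often called the Heydemann correction). The sketch below applies that mapping for an assumed distortion model with offsets p and q, gain ratio r and non-orthogonality alpha; in practice these parameters come from the least-squares elliptical fit described above, and the exact parametrization may differ from the paper's.

```python
import math

def heydemann_phase(u1, u2, p, q, r, alpha):
    """Recover the interferometric phase from distorted quadrature
    signals modelled as u1 = p + cos(phi), u2 = q + sin(phi - alpha) / r."""
    x = u1 - p
    y = (r * (u2 - q) + x * math.sin(alpha)) / math.cos(alpha)
    return math.atan2(y, x)

# Synthetic check with assumed ellipse parameters.
p, q, r, alpha, phi = 0.10, -0.05, 1.2, 0.05, 0.7
u1 = p + math.cos(phi)
u2 = q + math.sin(phi - alpha) / r
phase = heydemann_phase(u1, u2, p, q, r, alpha)  # recovers 0.7
```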

226

This paper describes the use of an assembly of electro-optic components and a high-performance capacitive voltage divider in a novel voltage-measurement probe for use in short-duration 500 kV pulsed-power situations, where conventional probes are unsuitable. Complete electrical isolation is provided between the high-voltage circuit and the recording oscilloscope. Experimental results confirm a predicted rise-time of less than about 3 ns, which is significantly better than that of high-quality 90 MHz bandwidth commercially available probes for the same level of voltage.
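
The rise-time comparison can be sanity-checked with the usual first-order relation t_r ≈ 0.35/B, which assumes a single-pole, Gaussian-like response: a 90 MHz probe cannot respond faster than roughly 3.9 ns, so a measured rise-time below about 3 ns is indeed an improvement.

```python
def rise_time_from_bandwidth(bandwidth_hz):
    """Approximate 10-90% rise-time of a single-pole system,
    t_r ~ 0.35 / B."""
    return 0.35 / bandwidth_hz

t_r = rise_time_from_bandwidth(90e6)  # about 3.9e-9 s
```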

DESIGN NOTES

001

From 2002, the Editorial Board of Measurement Science and Technology has decided to introduce an annual award for Best Design Note, in addition to the established annual Best Paper Award.

Nominations are now requested for Best Design Note. If you read a design note published in MST in 2002 which meets these criteria, please send your nominations to mst@iop.org giving the reasons for your choice.

The Design Notes section of MST is intended to provide useful information to the experimentalist community, and the purpose of the Best Design Note award is to acknowledge the best contribution to that aim. It is therefore appropriate to give those who make use of the content of the Design Notes a significant role in the selection of the awardee. Readers who have made use of the content of a Design Note and wish to nominate it for the award are welcome to do so, with an accompanying statement of its contribution to their research programme. These communications will play a substantial role in selecting the awardee.

N21

Surface topography plays a significant role in functional performance in situations such as friction, lubrication and wear. A European Community funded research programme on the areal characterization of steel sheet has recently advanced research in this area. This article describes the software that supported most of the programme. Born as a rudimentary collection of procedures, it grew steadily into an integrated package, later equipped with a graphical interface and circulated to the research community under an open-source philosophy.

N27

A novel device for sampling an unstable liquid-in-liquid or gas-in-liquid dispersed system has been developed. Samples of a dispersed system can be easily collected from any location in an operating stirred tank reactor. A sampled specimen is fixed instantaneously, enabling easy observation under a microscope and direct measurement of dispersion characteristics, such as size and/or distribution of the dispersed phase and hold-up through the use of an image analyser. The specimen can be stored for a sufficiently long time, with the dispersion maintained completely stable until measurement.

BOOK REVIEWS

229

If you've not been involved in MEMS (MicroElectroMechanical Systems) technology or had the cause to use MEMS devices, then you may wonder what all the fuss is about. What are MEMS anyway? What's the difference between MEMS and MST (MicroSystems Technology)? What are the advantages over existing technologies? If you have ever found yourself pondering over such questions, then this book may be for you.

As the title suggests, the main aim is to provide an introduction to MEMS by describing the processes and materials available and by using examples of commercially available devices. The intended readership is technical managers, engineers, scientists and graduate students who are keen to learn about MEMS but have little or no experience of the technology. I was particularly pleased to note that Maluf has dedicated a whole chapter to the important (and often difficult) area of packaging.

The first three chapters provide a general overview of the technology. Within the first three pages we are introduced to the MEMS versus MST question, only to discover that the difference depends on where you live! The United States prefers MEMS, while the Europeans use the handle MST. (Note to self: tell colleagues in the MEMS group at Southampton.) A good account is given of the basic materials used in the technology, including silicon, silicon oxide/nitride/carbide, metals, polymers, quartz and gallium arsenide. The various processes involved in the creation of MEMS devices are also described. A good treatment is given to etching and bonding in addition to the various deposition techniques. It was interesting to note that the author doesn't make a big issue of the differences between bulk and surface micromachined devices; the approach seems to be 'here's your toolbag - get on with it'.

One of the great strengths of this book is the coverage of commercial MEMS structures. Arising, as they have, from an essentially planar technology, MEMS devices are often elaborate three-dimensional creations, and 2D drawings don't do them much justice. I have to say that I was extremely impressed with the many aesthetic isometric views of some of these wonderful structures. Pressure sensors, inkjet print nozzles, mass flow sensors, accelerometers, valves and micromirrors are all given sufficient treatment to describe the fundamental behaviour and design philosophy, but without the mathematical rigour expected of a traditional journal paper.

Chapter 5 addresses the promise of the technology as a means of enabling a new range of applications. The concept of using MEMS devices as key elements within complex systems (or even microsystems!) is explored. The so-called 'lab-on-a-chip' approach is described, whereby complex analytical systems are integrated onto a single chip together with the associated micropumps and microvalves.

The design and fabrication of MEMS devices are important issues by themselves. A key area, often overlooked, is that of packaging. Painstaking modelling and intricate fabrication methodologies can produce resonator structures oscillating at precisely, say, 125 kHz. The device is then mounted in a dual-in-line carrier and the frequency shifts by 10 kHz because of the additional internal stresses produced. Packaging issues can't be decoupled from those of the micromachined components. Many of these issues, such as protective coatings, thermal management, calibration etc, are covered briefly in the final chapter.

Overall, I found this book informative and interesting. It has a broad appeal and gives a good insight into this fascinating and exciting subject area.

Neil White

229

Many students and researchers in the field of fluid mechanics or heat and mass transfer may already be familiar with the first edition of this book from 1994, under the editorship of Franz Mayinger. In keeping with its proven format, this new edition comprises the completely revised first edition and includes new chapters on the phase Doppler technique and on particle image velocimetry. Furthermore, a CD-ROM has been included, demonstrating the application of many of the techniques in the form of high-speed movies. These are all welcome additions to the book, which now covers the most commonplace techniques based on elastic and inelastic light scattering.

The actual contents of the book are described well by the 16 chapter titles: Introduction, The Schlieren Technique, Fundamentals of Holography and Interferometry, Holographic Interferometry, Short Time Holography, Evaluation of Holograms by Digital Signal Processing, Light Scattering, Laser-Doppler Velocimetry, Phase Doppler Anemometry, Dynamic Light Scattering, Raman Scattering, Laser Induced Fluorescence, Absorption, Pyrometry and Thermography, Tomography, and Particle Image Velocimetry. The level at which each subject is covered varies somewhat, with a clear preference for holographic-based techniques. Several chapters on fundamentals, e.g. light scattering or absorption, greatly improve the readability of the book while maintaining detail where necessary. The rather large list of references will be very helpful for those readers requiring further information. A very distinctive feature of this book is its liberal use of application examples, not just on the CD-ROM. A large majority of these examples come from the fields of heat transfer or combustion, areas for which Professor Mayinger's institute in Munich is well known. The book should be of great interest for graduate students, engineers and researchers involved in these fields and in the general field of fluid mechanics. It is perhaps less appropriate as a textbook accompanying a lecture course, at least without supplemental background material.

A total of 22 authors have contributed to the content. Unlike many similar attempts at collectively writing a book, this one reflects a conscious effort to include cross-references between chapters. The difficulty of inconsistent nomenclature has been solved by including a table of nomenclature for each individual chapter. Similarly, the citations, although numbered consecutively, are divided into chapter-related alphabetical lists. Indeed, the book makes a surprisingly homogeneous impression, despite the large author list. Perhaps the most distracting point is the rather poor English and grammar, and it is disappointing that the publisher did not value this aspect of the production more highly.

In summary, it is hard to imagine a laboratory in fluid mechanics, heat and mass transfer, combustion engineering or process engineering that would not benefit from having this book available in its library.

Cam Tropea

230

This book presents an introduction to the theory of electronic transport in the solid state. The author starts with a reminder of the quantum theory of condensed matter and then concentrates on carrier transport itself. A wide range of subtopics is touched upon in different chapters, e.g. transport in bulk samples and in small devices, mesoscopic phenomena, to name a few.

The author avoids using heavy mathematical machinery and, therefore, does not give a strict theoretical description. However, the presentation is clear and, as a rule, good examples are given. The author gives a brief review of the basic theoretical methods and models, like the Boltzmann equation, the balance equations, the scattering matrix approach etc. An application of numerical methods is demonstrated as well.

The book can be used as a good textbook by undergraduate and postgraduate students specializing in applied physics and related areas. A well structured presentation also allows the book to be used as a simple handbook on the theory of transport in the solid state.

This second edition of the book is considerably enlarged and revised compared with the first. However, if a third edition is to be considered, the author must improve it further. For example, the theory of transport in media with a non-parabolic conduction band and the transport properties of low-dimensional structures should be presented in much more detail. Many mesoscopic effects are missing from the last chapter, for instance the weak-localization correction to the classical conductivity. I believe also that a brief introduction to the methods of the quantum kinetic equation and Green's functions is necessary in any modern book on transport. In its present form, I would not recommend the book for republication.

Oleg Yevtushenko

230

The object of this book is to explore how and why the brightness and colour of light, which seem to be purely subjective qualities, have been transformed into measurable quantities. To answer these questions the author draws mainly on the history, sociology and philosophy of science. Instead of following a classical chronological style of presentation, Sean Johnston prefers to describe, over rather long periods of time, the main factors and actors that have influenced the development of light and colour measurement. This approach suits this unusual and complicated subject very well, allowing him to describe the different levels and nodes of the network that makes up this field of measurement.

After a clear introduction presenting the problem of light and colour measurement, the book starts with an evocation of the motivations and results of the pioneers of light measurement during the 17th and 18th centuries, followed by a presentation of the techniques available in the early days of photometry, radiometry and colorimetry. The rapid and important development of photometry during the 19th century, under pressure from the gas and later the electric lighting industries, is then analysed, leading to the conclusion that the eye of the observer used for the measurement is a very poor metrological apparatus. For the same period, the role and place of researchers and engineers in society are also studied. Then, for the beginning of the 20th century, the leading role of newly created metrological institutes in realizing and offering standards to industry for light and colour measurement is emphasized. The very difficult task of replacing visual photometry with physical photometry between the two world wars, to take advantage of the newly developed photoelectric detectors, is also analysed, showing that in spite of the difficulties it was possible to offer convenient instruments for light and colour measurement on the market. During the same period, the decisive role of the Commission Internationale de l'Éclairage (CIE) in reaching agreement on a coherent colorimetric system among groups of people with diverse interests is also explained. The development of radiometry, mainly for military applications, from the end of the Second World War up to the beginning of the 1970s is then analysed. Considering the unusual development and evolution of the measurement of light and colour compared with most of the other sciences, the author, in the last chapter of the book, suggests defining photometry, radiometry and colorimetry as 'peripheral sciences' having well specified properties.

The book is easy and very pleasant to read. It provides not only a lot of interesting information on the various techniques used in the past for light and colour measurement but, first and foremost, clear explanations of the causes and the paths of the development and evolution of this specific field of metrology. Many people, organizations and events that have had major influences on the history of photometry, radiometry and colorimetry are presented. Even if it is not possible to be exhaustive and to extend the covered period of time indefinitely, it would probably have been interesting to have a few words on the role of the Consultative Committee of Photometry and to extend the covered period to 1979 in order to include the present definition of the unit of luminous intensity.

The terminology used avoids overly specific terms known only to specialists, but nevertheless remains perfectly correct even if, now and then, there is some small loss of accuracy. A very large number of people could therefore read and understand this book even if they are not experts in light and colour measurement. The book can be recommended not only to specialists in light and colour measurement who wish to understand the present situation in this specific field of metrology and trace its historical development, but also to everybody interested in metrology in general, because most of the explanations given for photometry, radiometry and colorimetry can be applied to other fields of metrology.

Jean Bastie

230

This book provides an in-depth and comprehensive survey of the current state of the art of the international atomic timescale and its relationship to the practical and legal timescales it underpins. The authors are two of the leading figures in French time metrology. Claude Audoin, formerly Director of the Laboratoire de l'Horloge Atomique at Orsay, has had a long and distinguished career in the development of atomic frequency standards and atomic clocks. Bernard Guinot, formerly Director of the Bureau International de l'Heure (now absorbed by the Bureau International des Poids et Mesures, BIPM), has a lifetime's experience in the development of timescales. The work reviewed here is a revised and updated translation of Les fondements de la mesure du temps (Paris: Masson, 1998). It is written at a level appropriate for an experienced graduate student or postdoctoral scientist entering either the field of time and frequency metrology or that of precise astronomical observation, whilst having sufficient depth to be useful as a work of reference for the established researcher in these fields. The first third of the book discusses the physical foundations of time metrology and introduces key concepts in the use of physical oscillators as clocks. The historical account of the complex process by which mean solar time came to be replaced by Coordinated Universal Time (UTC) will be particularly informative for newcomers to the field.

About half of the volume is devoted to atomic frequency standards, with an exposition of the physics of caesium beam clocks and caesium fountains and a review of other commonly used standards such as the hydrogen maser, rubidium cell and stored ion clock. Although thorough, this 127-page chapter does not replace Audoin's earlier work with Jacques Vanier, The Quantum Physics of Atomic Frequency Standards (Bristol: IOP Publishing, 1989, out of print), which will remain the Bible for physicists working on clock development. The present time is one of great excitement in the development of atomic frequency standards. At the time the French edition was published, the first caesium fountain frequency standard, at the BNM-LPTF in Paris, had recently demonstrated a relative frequency uncertainty of 1.5 × 10⁻¹⁵. Two such standards, at PTB and NIST, are now contributing to International Atomic Time (TAI). Clocks based on optical transitions in ions or atoms are just beginning to become a reality, thanks to Ted Hänsch's breakthrough with the femtosecond optical frequency comb generator, which arrived on the scene just in time to receive a mention in this English edition. In a rapidly evolving field, any general survey is bound to be overtaken by events.

Primary caesium frequency standards are designed to realize the SI definition of the second. The relationships between the SI second, TAI and UTC, as well as the national approximations to UTC generated by national standards laboratories such as NPL, the timescale available from GPS and timescales used in astronomy, are carefully set out. Finally, the practical applications of these timescales and frequency standards are reviewed, ranging from ultraprecise determinations of fundamental physical constants to positioning and navigation. The problem of longitude is as pressing as in Harrison's day, but for the foreseeable future, clockmakers will continue to have the upper hand over astronomers!

Despite betraying the occasional Gallicism, the English translation reads with the clarity of the French original. The reader is guided painlessly through the multiplicity of acronyms with which the topic abounds. Audoin and Guinot have produced a book that will stand the test of time.

Stephen N Lea

231

This book is based on presentations given at the 2nd EURACHEM Workshop on Current Issues in Teaching Quality in Chemical Measurements, held at GKSS (27-29 September 1998). Fifteen presentations are included, all of which are centred on overhead transparencies which are printed in small size (some are almost illegible) in the book. The printed slides are intended mainly for identification since all (over 300) are contained on a CD-ROM, which accompanies the book. Thus any slides may be downloaded and printed for use in training or teaching in the field of quality assurance in chemical measurement. The content of the slides varies from a few lines of text to quite involved figures, tables and flow diagrams. In my experience, publishing information in this format is rare, but made possible by the continuing advances in information technology. Whether it is successful remains to be seen. To download the slides, PowerPoint 97 or PowerPoint Viewer 97 is needed together with a PC operating Windows 95 or later. The viewer can be downloaded from the Microsoft homepages as shareware.

The presentations are divided into four sections. The first section contains three lectures, which cover analytical quality management and quality assurance in industry, academia and research projects. The second section contains five worked examples of teaching analytical quality concepts. This is followed by four descriptive experiments designed to demonstrate the principles of quality assurance. The final section covers course structures, contents and experiences. The stated aim of the book is to provide a reference text and source of course materials for lecturers and also to act as an advanced textbook for analytical students and professional analysts. The book has certainly achieved its first aim and should be genuinely useful to teachers of analytical chemistry at honours degree level and for some postgraduate as well as in-house training courses. Whether it will prove useful as an advanced textbook is more doubtful since the way in which the material is presented does not make it very readable. Certainly, I would not recommend the book for this purpose unless it was linked specifically to a particular lecture or training course.

Generally, the production of the book and the accompanying CD-ROM is of a high standard.

R T Bailey

231

One of my greatest hates is the edited book with chapters from a number of different authors, some of which have evidently been written in haste against the final deadline, and most of which show marked divergence of content, opinion, notation and presentation. I am very relieved to be able to say that, though edited, this volume does not fall into this stereotype, in spite - or perhaps partly because - of the relatively few authors who have contributed to it. Indeed, it falls into the category of a profound study of one major, topical aspect of machine (or computer) vision, and provides suitable contrasts which display the complexities of this subject area, while at the same time bringing the reader to a clear understanding of the problems, the principles and the ways in which solutions can be found. With seven main chapters (the first `chapter' is only a five-page overview of the others, and there is no concluding chapter), it is surprising how much is packed into the volume and how skilfully the overlapping themes have been brought together - illustrating in fact that these themes are core to the subject and at the same time showing from different perspectives how they can be handled. Perhaps most important amongst these recurring themes are the roles of robust algorithms and robust statistics, the problems of ambiguity and (for example) `dangerous surfaces', the problems of focusing (which severely affects camera calibration), the importance of nonlinear distortions (which in high accuracy work absolutely have to be corrected for) and, naturally, the measurement of camera parameters (which are, as is now standard, divided into `internal' and `external' parameters). This skilful compilation is particularly welcome considering that in the Preface the Editor states `This is not a textbook. Therefore, it is neither consistent in diction nor content.'

Having succeeded on the coherence front, the book could fall into another serious trap: it is stated to be the result of a workshop on the same topic as the title of the book, held as long ago as July 1992, having `for various reasons' only reached publication in the current form almost a decade later. In a fast moving subject such as 3D vision this could be a serious problem, and make the volume totally irrelevant. However, I did not find it thus. Clearly, the authors have not been idly twiddling their thumbs all this time: they have managed to update the chapters sufficiently to be of high relevance to the present. All the authors are international figures, and interestingly, the authors of the last chapter have recently written a major textbook on the geometry of multiple images, whose contents are well reflected in their chapter.

Of great importance to this volume is the fact that the thinking comes from two directions, photogrammetry and computer vision, there being four chapters on the former and three on the latter topic. Over time, these two subjects have become so large that it is difficult to know the whole of either, let alone the whole of both, as they have grown up with different motivations in different backgrounds. Yet it has become imperative to put the two together, so as not to have to re-invent the wheel many times in different contexts (I hardly think the `Not Invented Here' syndrome has been important in this respect). This volume, and no doubt the workshop on which it was based, has managed to do this, and has enlivened and enriched the whole area. Incidentally, I ought to summarize the difference between the two constituent disciplines: computer vision is aimed at studying the real (largely 3D) world from the medium of images, which it takes as incidental; while photogrammetry regards the images themselves as the core representation and aims to make this representation as accurate as possible (the aim, par excellence, of photogrammetry is automatic production of maps from remotely sensed images).

So far I may have given the impression that this volume is exemplary in all aspects. However, it will be hard work for readers who are not au fait with 3D vision. For one thing, the mathematics of 3D vision is by no means trivial; what is more, there are vitally important theorems stretching back over many years (cf a key paper by H C Longuet-Higgins on `A computer algorithm for reconstructing a scene from two projections' Nature (1981), not to mention the still highly relevant Kruppa relations which date from as long ago as 1913). Then there are the pitfalls - in particular ambiguity, which makes itself manifest in many forms - starting with the minimum number of points required for 3D recognition and orientation, which depends strongly on whether the perspective is full or weak and whether the points are coplanar or non-coplanar, and going on to more complex `dangerous surfaces' - thereby broadening the subject from the possible or impossible to interpret category to the stable or unstable measurement situation, to which the solutions offered by robust statistics are intimately linked. It goes without saying that it is not the fault of the authors that the subject area is complex and that the tools required to tackle it in anger are not trivial. However, the result is that the reader will need a basic knowledge of 3D vision before studying this volume, and at the very least he or she should have read a basic textbook on the subject (there are now a number of such texts), and should, in addition, have a substantial pile of original papers for reference. Having all this to hand, the reader will learn a great deal from this volume: its xi + 235 pages are packed with concentrated, interesting material that is well written and constructed, and contain vitally important lessons for the student and the practitioner. The browser will also be able to learn something. The sections on robust estimation, image distortions, problems caused by focusing, ambiguity and so forth, are didactic, and the reader will be much aided by the valuable unifying index.

Finally, I ought to list the various chapters, so that readers will know what is in store for them. Without exception, all are worthy of serious study and none is a `space filler':

  • A Gruen: `Introduction' (5 pp, no references)

  • B P Wrobel: `Minimum solutions for orientation' (56 pp, 121 references)

  • W Förstner: `Generic estimation procedures for orientation with minimum and redundant information' (32 pp, 20 references)

  • C S Fraser: `Photogrammetric camera component calibration: a review of analytical techniques' (27 pp, 24 references)

  • D B Gennery: `Least-squares camera calibration including lens distortion and automatic editing of calibration points' (14 pp, 7 references)

  • R G Willson and S A Shafer: `Modelling and calibration of variable-parameter camera systems' (25 pp, 1 reference)

  • A Gruen and H A Beyer: `System calibration through self-calibration' (31 pp, 13 references)

  • Q-T Luong and O D Faugeras: `Self-calibration of a stereo rig from unknown camera motions and point correspondences' (35 pp, 50 references).

Clearly, the 236 references are rather unevenly distributed between the chapters. However, collectively they are highly useful and a fair proportion do fall in the hidden decade 1992-2001 (I counted 62 that had been published over this period, though only 16 from 1995-2001).

Overall, I have no hesitation in recommending this book to those who will be working in this area. It is a highly relevant and topical work that contains valuable reference material for the non-specialist, and it does an excellent job of reuniting two subject areas that had gone their own ways over a period of some 30 years.

E R Davies

232

This revised edition of a book which forms a part of the International Union of Crystallography's series of Texts on Crystallography is some 30% larger than its predecessor, amongst other reasons because of increased coverage of diffraction and the inclusion of a separate chapter on the stereographic projection. Overall Dr Hammond covers the topics one would expect from the title. It is particularly pleasing to see considerable emphasis placed on symmetry, a topic that is fundamental to crystallography but is all too often played down these days in elementary courses because of the demands it makes on students' ability to appreciate matters that are underpinned by rigorous mathematical arguments, even when, as here, the formalities are replaced by persuasive text. Indeed the level of the mathematics in the book has deliberately been kept simple. The expanded chapters on diffraction cover the basic theoretical background for x-ray and electron diffraction, with a brief reference to neutron diffraction, illustrated by reference to a range of experimental methods. Perhaps the selection might have given greater emphasis to counter-diffractometry and computer-based data analysis rather than to photographic methods but the latter are arguably better at illustrating important geometrical aspects of data collection. Each chapter is rounded off with a helpful set of exercises, for which detailed answers are provided at the end of the book.

A separate strand that runs throughout the book informs the reader about the history of the subject from its early days to the present, culminating in appendix 3, which presents brief biographies of almost 50 figures influential in the development of crystallography and diffraction, many of whom are remembered daily, without appreciation of their human side, because their names are attached to important parts of the theoretical or practical `apparatus' of the subject. Other appendices contain helpful information about sources of components for crystal model-building and about useful software as well as information on a number of geometrical and mathematical topics.

Finally, one name in particular leapt out from the author's acknowledgments: the reviewer too acknowledges with gratitude the guidance of Dr Norman Henry many years ago!

In short, this book is an excellent attempt to present in an interesting yet simple way a subject that many students find daunting. It well deserves to succeed.

John A Leake