Structural health monitoring (SHM) is the automation of the condition assessment process of an engineered system. When applied to geometrically large components or structures, such as those found in civil and aerospace infrastructure and systems, a critical challenge is designing a sensing solution that yields actionable information. This is difficult to do cost-effectively because of the large surfaces under consideration and the localized nature of typical defects and damage. There has been significant research effort in empowering conventional measurement technologies for SHM applications in order to improve the performance of the condition assessment process. Yet the field implementation of these SHM solutions is still in its infancy, attributable to various economic and technical challenges. The objective of this Roadmap publication is to discuss modern measurement technologies that were developed for SHM purposes, along with their associated challenges and opportunities, and to chart a path for research and development efforts that could yield impactful field applications. The Roadmap is organized into four sections: distributed embedded sensing systems, distributed surface sensing systems, multifunctional materials, and remote sensing. Recognizing that many measurement technologies may overlap between sections, we define distributed sensing solutions as those that involve or imply a number of sensors geometrically organized within (embedded) or over (surface) the monitored component or system. Multifunctional materials are sensing solutions that combine multiple capabilities, for example those also serving structural functions. Remote sensing solutions are contactless, for example those based on cell phones, drones, and satellites; the category also includes remotely controlled robots.
ISSN: 1361-6501
Launched in 1923, Measurement Science and Technology was the world's first scientific instrumentation and measurement journal and the first research journal produced by the Institute of Physics. It covers all aspects of the theory, practice and application of measurement, instrumentation and sensing across science and engineering.
Simon Laflamme et al 2023 Meas. Sci. Technol. 34 093001
Liisa M Hirvonen and Klaus Suhling 2017 Meas. Sci. Technol. 28 012003
Time-correlated single photon counting (TCSPC) is a widely used, robust and mature technique to measure the photon arrival time in applications such as fluorescence spectroscopy and microscopy, LIDAR and optical tomography. In the past few years there have been significant developments with wide-field TCSPC detectors, which can record the position as well as the arrival time of the photon simultaneously. In this review, we summarise different approaches used in wide-field TCSPC detection, and discuss their merits for different applications, with emphasis on fluorescence lifetime imaging.
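As a minimal illustration of the TCSPC principle described above (not tied to any specific detector in this review), the sketch below histograms simulated photon arrival times and estimates a fluorescence lifetime; the lifetime, photon count and bin layout are all hypothetical values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mono-exponential fluorescence decay with lifetime tau = 2.5 ns.
tau_true = 2.5
arrivals = rng.exponential(tau_true, size=50_000)   # photon arrival times (ns)

# TCSPC: accumulate a histogram of arrival times in fixed timing bins.
counts, edges = np.histogram(arrivals, bins=np.linspace(0.0, 25.0, 251))

# Simplest lifetime estimator: the mean arrival time, which equals tau for an
# ideal exponential decay (no instrument response function, no background).
tau_est = arrivals.mean()
print(f"estimated lifetime: {tau_est:.2f} ns")
```

In wide-field TCSPC such a histogram is accumulated per pixel, so the same estimator yields a lifetime image.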
Adam Thompson et al 2021 Meas. Sci. Technol. 32 105013
Maximum permissible errors (MPEs) are an important measurement system specification and form the basis of periodic verification of a measurement system's performance. However, there is no standard methodology for determining MPEs, so when they are not provided, or not suitable for the measurement procedure performed, it is unclear how to generate an appropriate value with which to verify the system. Whilst a simple approach might be to take many measurements of a calibrated artefact and then use the maximum observed error as the MPE, this method requires a large number of repeat measurements for high confidence in the calculated MPE. Here, we present a statistical method of MPE determination, capable of providing MPEs with high confidence and minimum data collection. The method is presented with 1000 synthetic experiments and is shown to determine an overestimated MPE within 10% of an analytically true value in 99.2% of experiments, while underestimating the MPE with respect to the analytically true value in 0.8% of experiments (overestimating the value, on average, by 1.24%). The method is then applied to a real test case (probing form error for a commercial fringe projection system), where the efficiently determined MPE is overestimated by 0.3% with respect to an MPE determined using an arbitrarily chosen large number of measurements.
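The naive maximum-observed-error approach mentioned above can be contrasted with a simple parametric bound. The sketch below is a toy illustration under an assumed normal error model with hypothetical bias and spread; it is not the statistical method developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic repeat measurements of a calibrated artefact: errors drawn from an
# assumed normal distribution (bias and sigma are hypothetical values).
bias, sigma = 0.002, 0.010                     # mm
errors = bias + sigma * rng.standard_normal(30)

# Naive MPE: largest observed |error|; needs many repeats for high confidence.
mpe_naive = np.abs(errors).max()

# Simple parametric alternative (a toy bound, not the paper's method):
# assume normal errors and take a k = 3 coverage bound on |error|.
mpe_param = abs(errors.mean()) + 3.0 * errors.std(ddof=1)

print(f"naive MPE: {mpe_naive:.4f} mm, parametric MPE: {mpe_param:.4f} mm")
```

The parametric bound extrapolates beyond the observed maximum from far fewer repeats, which is the general motivation for a statistical MPE determination.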
Hamidreza Eivazi et al 2024 Meas. Sci. Technol. 35 075303
High-resolution reconstruction of flow-field data from low-resolution and noisy measurements is of interest due to the prevalence of such problems in experimental fluid mechanics, where the measurement data are in general sparse, incomplete and noisy. Deep-learning approaches have been shown to be suitable for such super-resolution tasks. However, a large number of high-resolution examples is needed, which may not be available in many cases. Moreover, the obtained predictions may fail to comply with physical principles, e.g. mass and momentum conservation. Physics-informed deep learning provides frameworks for integrating data and physical laws in learning. In this study, we apply physics-informed neural networks (PINNs) for super-resolution of flow-field data in both time and space from a limited set of noisy measurements, without any high-resolution reference data. Our objective is to obtain a continuous solution of the problem, providing a physically-consistent prediction at any point in the solution domain. We demonstrate the applicability of PINNs for the super-resolution of flow-field data in time and space through three canonical cases: Burgers' equation, two-dimensional vortex shedding behind a circular cylinder and the minimal turbulent channel flow. The robustness of the models is also investigated by adding synthetic Gaussian noise. Furthermore, we show the capabilities of PINNs to improve the resolution and reduce the noise in a real experimental dataset consisting of hot-wire-anemometry measurements. Our results show the adequate capabilities of PINNs in the context of data augmentation for experiments in fluid mechanics.
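The physics-informed loss at the heart of a PINN penalizes the residual of the governing equation at collocation points. As a conceptual sketch (evaluating the Burgers residual u_t + u·u_x − ν·u_xx by finite differences on an assumed toy field, rather than by automatic differentiation through a trained network), the quantity a PINN drives toward zero can be computed as:

```python
import numpy as np

nu = 0.01 / np.pi                       # viscosity in the Burgers benchmark
x = np.linspace(-1.0, 1.0, 201)
t = np.linspace(0.0, 1.0, 101)
dx, dt = x[1] - x[0], t[1] - t[0]

# Candidate field u(t, x): a smooth decaying wave (an assumed toy field,
# deliberately not an exact Burgers solution, so the residual is nonzero).
X, T = np.meshgrid(x, t)                # shapes (len(t), len(x))
u = -np.sin(np.pi * X) * np.exp(-nu * np.pi**2 * T)

# Burgers residual r = u_t + u*u_x - nu*u_xx via finite differences.
u_t = np.gradient(u, dt, axis=0)
u_x = np.gradient(u, dx, axis=1)
u_xx = np.gradient(u_x, dx, axis=1)
residual = u_t + u * u_x - nu * u_xx

# A PINN's physics loss is the mean squared residual over collocation points.
physics_loss = np.mean(residual**2)
print(f"physics loss: {physics_loss:.3e}")
```

During training, this term is added to the data-misfit loss so the network is penalized for predictions that violate the PDE between measurement points.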
Martin Kögler and Bryan Heilala 2020 Meas. Sci. Technol. 32 012002
Time-gated (TG) Raman spectroscopy (RS) has been shown to be an effective technical solution for the major problem whereby sample-induced fluorescence masks the Raman signal during spectral detection. Technical methods of fluorescence rejection have come a long way since the early implementations of large and expensive laboratory equipment, such as the optical Kerr gate. Today, more affordable small-sized options are available. These improvements are largely due to advances in the production of spectroscopic and electronic components, leading to the reduction of device complexity and costs. An integral part of TG Raman spectroscopy is the temporally precise synchronization (picosecond range) between the pulsed laser excitation source and the sensitive and fast detector. The detector collects the Raman signal during the short laser pulses, while fluorescence emission, which has a longer delay, is rejected during the detector dead-time. TG Raman is also resistant to ambient light as well as thermal emissions, due to its short measurement duty cycle.
In recent years, the focus in the study of ultra-sensitive and fast detectors has been on gated and intensified charge coupled devices (ICCDs), or on CMOS single-photon avalanche diode (SPAD) arrays, which are also suitable for performing TG RS. SPAD arrays have the advantage of being even more sensitive, with better temporal resolution compared to gated CCDs, and without the requirement for excessive detector cooling. This review aims to provide an overview of TG Raman from early to recent developments, its applications and extensions.
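The time-gating principle can be illustrated with a toy simulation: prompt Raman photons are accepted inside a short gate, while delayed fluorescence photons are mostly rejected. All time constants and photon counts below are assumed values, not taken from the review.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical photon arrival times (ps) relative to the laser pulse:
# Raman photons are prompt (within a ~100 ps pulse), fluorescence decays
# on a nanosecond scale.
raman = rng.uniform(0.0, 100.0, size=5_000)       # prompt Raman photons
fluor = rng.exponential(3_000.0, size=50_000)     # tau_fl = 3 ns fluorescence

arrivals = np.concatenate([raman, fluor])
labels = np.concatenate([np.ones(5_000), np.zeros(50_000)])  # 1 = Raman

# Time gate: accept only photons arriving inside the short gate window.
gate = arrivals < 150.0                           # assumed 150 ps gate width

raman_fraction_ungated = labels.mean()
raman_fraction_gated = labels[gate].mean()
print(f"Raman fraction without gate: {raman_fraction_ungated:.1%}")
print(f"Raman fraction with gate:    {raman_fraction_gated:.1%}")
```

Even this crude gate raises the Raman fraction from under 10% to well over half, which is why picosecond synchronization between laser and detector pays off.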
Mohammadmahdi Abedi et al 2024 Meas. Sci. Technol. 35 065601
In this study, a self-sensing and self-heating natural-fibre-reinforced cementitious composite for the shotcrete technique was developed using Kenaf fibres. For this purpose, a series of Kenaf fibre concentrations were subjected to initial chemical treatment, followed by integration into a cement-based composite containing hybrid carbon nanotubes (CNTs) and graphene nanoplatelets (GNPs). The investigation encompassed the mechanical, microstructural, sensing, and Joule heating performance of the environmentally friendly shotcrete mixture, with comparisons drawn against a counterpart blend featuring a conventionally synthesized polypropylene (PP) fibre. Following the experimental phase, a comprehensive 3D nonlinear finite difference (3D NLFD) model of an urban twin road tunnel, complete with all relevant components, was formulated using the FLAC3D (fast Lagrangian analysis of continua in 3 dimensions) code and subjected to rigorous validation. The performance of this green shotcrete mixture as the lining of the inner shell of the tunnel was assessed comparatively using this 3D numerical model under static and dynamic loading; the twin tunnel was subjected to a harmonic seismic load with a duration of 15 s. The laboratory findings showed a reduction in the composite's sensing and heating potentials for both Kenaf and PP fibre reinforcement, while incorporating a specific quantity of fibre yields a substantial enhancement in both the mechanical characteristics and microstructural attributes of the composite. Digital image correlation analysis demonstrated that Kenaf fibres were highly effective in controlling cracks in cement-based composites. Furthermore, based on the static and dynamic 3D NLFD analyses, this green cement-based composite demonstrated its potential for shotcrete applications as the lining of the inner shell of the tunnel.
This study opens a promising perspective on the contribution of natural fibres to multifunctional, sustainable, reliable and affordable cement-based composites for today's world.
A Sciacchitano 2019 Meas. Sci. Technol. 30 092001
Particle image velocimetry (PIV) has become the chief experimental technique for velocity field measurements in fluid flows. The technique yields quantitative visualizations of the instantaneous flow patterns, which are typically used to support the development of phenomenological models for complex flows or for validation of numerical simulations. However, due to the complex relationship between measurement errors and experimental parameters, the quantification of the PIV uncertainty is far from being a trivial task and has often relied upon subjective considerations. Recognizing the importance of methodologies for the objective and reliable uncertainty quantification (UQ) of experimental data, several PIV-UQ approaches have been proposed in recent years that aim at the determination of objective uncertainty bounds in PIV measurements.
This topical review on PIV uncertainty quantification aims to provide the reader with an overview of error sources in PIV measurements and to inform them of the most up-to-date approaches for PIV uncertainty quantification and propagation. The paper first introduces the general definitions and classifications of measurement errors and uncertainties, following the guidelines of the International Organization for Standardization (ISO) and of renowned books on the topic. Details on the main PIV error sources are given, considering the entire measurement chain from timing and synchronization of the data acquisition system, to illumination, mechanical properties of the tracer particles, imaging of the particles, analysis of the particle motion, data validation and reduction. The focus is on planar PIV experiments for the measurement of two- or three-component velocity fields.
Approaches for the quantification of the uncertainty of PIV data are discussed. Those are divided into a-priori UQ approaches, which provide a general figure for the uncertainty of PIV measurements, and a-posteriori UQ approaches, which are data-based and aim at quantifying the uncertainty of specific sets of data. The findings of a-priori PIV-UQ based on theoretical modelling of the measurement chain as well as on numerical or experimental assessments are discussed. The most up-to-date approaches for a-posteriori PIV-UQ are introduced, highlighting their capabilities and limitations.
As many PIV experiments aim at determining flow properties derived from the velocity fields (e.g. vorticity, time-average velocity, Reynolds stresses, pressure), the topic of PIV uncertainty propagation is tackled considering the recent investigations based on Taylor series and Monte Carlo methods. Finally, the uncertainty quantification of 3D velocity measurements by volumetric approaches (tomographic PIV and Lagrangian particle tracking) is discussed.
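As a minimal illustration of Monte Carlo uncertainty propagation from velocity to one of the derived quantities mentioned above (vorticity), the sketch below perturbs a synthetic solid-body-rotation field; the grid pitch, rotation rate and per-vector uncertainty are hypothetical values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 2D PIV field: solid-body rotation, so the true vorticity is 2*omega.
n, h = 32, 1.0e-3                      # grid points per side, vector pitch (m)
y, x = np.meshgrid(np.arange(n) * h, np.arange(n) * h, indexing="ij")
omega = 50.0                           # rotation rate (rad/s), assumed
u = -omega * (y - y.mean())
v = omega * (x - x.mean())

sigma_u = 0.05                         # assumed per-vector uncertainty (m/s)

def vorticity(u, v):
    # Central differences in the interior, matching a common PIV post-processor.
    return np.gradient(v, h, axis=1) - np.gradient(u, h, axis=0)

omega_z = vorticity(u, v)[n // 2, n // 2]   # exact: 2*omega for this field

# Monte Carlo propagation: perturb the vectors, collect vorticity samples.
samples = [
    vorticity(u + sigma_u * rng.standard_normal(u.shape),
              v + sigma_u * rng.standard_normal(v.shape))[n // 2, n // 2]
    for _ in range(500)
]
u_omega = np.std(samples)
print(f"vorticity {omega_z:.1f} rad/s, MC standard uncertainty {u_omega:.1f} rad/s")
```

The spread of the samples is the propagated standard uncertainty; a Taylor-series propagation of the same differencing stencil gives sigma_u/h here, which the Monte Carlo estimate reproduces.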
Louise Wright and Stuart Davidson 2024 Meas. Sci. Technol. 35 051001
Digital twinning is a rapidly growing area of research. Digital twins combine models and data to provide up-to-date information about the state of a system. They support reliable decision-making in fields such as structural monitoring and advanced manufacturing. The use of metrology data to update models in this way offers benefits in many areas, including metrology itself. The recent activities in digitalisation of metrology offer a great opportunity to make metrology data 'twin-friendly' and to incorporate digital twins into metrological processes. This paper discusses key features of digital twins that will inform their use in metrology and measurement, highlights the links between digital twins and virtual metrology, outlines what use metrology can make of digital twins and how metrology and measured data can support the use of digital twins, and suggests potential future developments that will maximise the benefits achieved.
Thomas Engel 2023 Meas. Sci. Technol. 34 032002
The field of optical 3D metrology has gained significant interest in recent years. Optical sensors can probe the geometry of workpieces and biological samples very fast, with high accuracy, and without any tactile physical contact with the object's surface. In this respect, optical sensors are a prerequisite for many applications in major trends such as the Industrial Internet of Things, Industry 4.0 or Medicine 4.0. The interest in optical 3D metrology is shifting from metrology for quality assurance in industrial production towards "digitizing the real world": facilitating a precise digital representation of an object or an environment for documentation, or as input data for virtual applications such as digital fabrication or augmented reality. The aspiration to digitize the world necessitates fast and efficient contact-free sensing principles of appropriate accuracy for solid and even soft objects with a variety of colours, surface textures and lighting conditions. This review article gives a concise conceptual overview of the evolution of the broad variety of optical measurement principles that have gained importance in the field of 3D metrology for industrial 3D applications, and of their related technological enablers.
Gustavo Quino et al 2021 Meas. Sci. Technol. 32 015203
Digital image correlation (DIC) is a widely used technique in experimental mechanics for full field measurement of displacements and strains. The subset matching based DIC requires surfaces containing a random pattern. Even though there are several techniques to create random speckle patterns, their applicability is still limited. For instance, traditional methods such as airbrush painting are not suitable in the following challenging scenarios: (i) when time available to produce the speckle pattern is limited and (ii) when dynamic loading conditions trigger peeling of the pattern. The development and application of some novel techniques to address these situations is presented in this paper. The developed techniques make use of commercially available materials such as temporary tattoo paper, adhesives and stamp kits. The presented techniques are shown to be quick, repeatable, consistent and stable even under impact loads and large deformations. Additionally, they offer the possibility to optimise and customise the speckle pattern. The speckling techniques presented in the paper are also versatile and can be quickly applied in a variety of materials.
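A random speckle pattern of the kind required by subset-matching DIC can also be synthesized digitally, e.g. for pattern optimisation studies before printing. The sketch below renders Gaussian speckles with assumed size and density; it is a generic illustration, not a reproduction of the paper's tattoo, adhesive or stamp techniques.

```python
import numpy as np

rng = np.random.default_rng(4)

# Render a synthetic speckle pattern: white background, randomly placed dark
# Gaussian speckles (image size, count and speckle radius are assumed values).
H, W = 256, 256
n_speckles, radius = 1_200, 2.0        # count, radius in px

yy, xx = np.mgrid[0:H, 0:W]
img = np.ones((H, W))
for x0, y0 in zip(rng.uniform(0, W, n_speckles), rng.uniform(0, H, n_speckles)):
    img -= np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * radius**2))
img = np.clip(img, 0.0, 1.0)

# Quick quality metric: fraction of intensity covered by speckles.
coverage = 1.0 - img.mean()
print(f"speckle coverage: {coverage:.1%}")
```

Varying the count and radius lets one tune coverage and speckle size toward values reported as favourable for subset matching before committing to a physical pattern.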
Bartosz Pruchnik et al 2024 Meas. Sci. Technol. 35 085901
Scanning probe microscopy (SPM) is a broad family of diagnostic methods. A common limitation of SPM is that it interacts only with the specimen surface, which is especially troublesome for complex volumetric systems, e.g. microbial or microelectronic ones. Scanning thermal microscopy (SThM) overcomes this constraint, since thermal information is collected from a broader volume. We present a transformer-bridge-based setup for resistive nanoprobe microscopy. With a low-frequency (LF) detection signal (approx. 1 kHz), the bridge resolution becomes independent of parasitic capacitances present in the measurement setup. We characterise the setup and give its metrological description, obtaining a system resolution of 0.6 mK at a sensitivity of 5 mV K−1. The transformer bridge also provides galvanic separation, enabling measurements in various environments, as pursued for the purposes of molecular biology. We present SThM measurement results for a high-thermal-contrast sample of carbon fibres in an epoxy resin. Finally, we analyse the influence of thermal imaging on topography imaging in terms of information channel capacity. We conclude that the transformer-bridge-based SThM system is a fully functional design operating with low driving frequencies and resistive thermal nanoprobes by Kelvin Nanotechnology.
Liu Yang et al 2024 Meas. Sci. Technol. 35 086107
To tackle the challenges of early fault warning and of improving prediction accuracy for the remaining useful life (RUL) of rolling bearings, this paper proposes a similarity health indicator and a CG-CGAN predictive model: a conditional generative adversarial network (CGAN) that combines a one-dimensional convolutional neural network (CNN) with a bidirectional gated recurrent unit (Bi-GRU). This framework provides a comprehensive theoretical foundation for RUL prediction of rolling bearings. The similarity health indicator allows early fault warning of rolling bearings without expert knowledge. Within the CGAN framework, the inclusion of constraints guides sample generation in a more targeted manner. Additionally, the proposed CG-CGAN model incorporates the Bi-GRU to consider both forward and backward information, thus improving the precision of RUL forecasting. Firstly, the similarity indicator between the vibration signals of the rolling bearing over its full life span and the standard (healthy-status) vibration signals is calculated; this indicator helps to determine the early deterioration points of the rolling bearings. Secondly, a feature matrix composed of traditional health indicators and the similarity health indicator is used to train and test the proposed CG-CGAN model for RUL prediction. Finally, to corroborate the efficacy of the proposed method, two sets of real rolling bearing accelerated-life experiment data from the Intelligent Maintenance Systems (IMS) are utilized. The experimental findings substantiate that the proposed similarity health indicator offers early fault alerts and precisely delineates the performance degradation of the rolling bearing, and that the proposed CG-CGAN model achieves high-precision RUL prediction.
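One plausible form of a similarity health indicator (a sketch with assumed signal parameters and a spectral cosine similarity, not the paper's exact definition) compares each vibration window with a healthy-state reference and flags the first window whose similarity drops below a threshold:

```python
import numpy as np

rng = np.random.default_rng(5)

def similarity(window, reference):
    """Cosine similarity between the magnitude spectra of two windows."""
    a = np.abs(np.fft.rfft(window))
    b = np.abs(np.fft.rfft(reference))
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

fs, n = 20_480, 2_048
t = np.arange(n) / fs
healthy = np.sin(2 * np.pi * 35 * t) + 0.05 * rng.standard_normal(n)

# Degrading signal: a growing component at an assumed fault frequency (147 Hz).
indicator = []
for k in range(50):
    sig = (np.sin(2 * np.pi * 35 * t)
           + 0.04 * k * np.sin(2 * np.pi * 147 * t)
           + 0.05 * rng.standard_normal(n))
    indicator.append(similarity(sig, healthy))

# Early-warning rule: first window whose similarity falls below a threshold.
threshold = 0.9
onset = next((i for i, s in enumerate(indicator) if s < threshold), None)
print(f"degradation flagged at window {onset}")
```

The indicator stays near 1 while the machine resembles its healthy baseline and decays as the fault component grows, which is what makes a threshold crossing usable as an early deterioration point.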
Yongjian Zhang et al 2024 Meas. Sci. Technol. 35 085103
Due to factors such as high temperatures, elevated pressures, and severe high-frequency shocks in the bore, there is considerable noise interference in acceleration test signals. This makes it challenging to accurately measure projectile motion acceleration using existing methods. To address this issue, we propose an advanced measurement approach that uses bottom pressure correction. Our model suggests a significant correlation between projectile motion acceleration and thrust. By utilizing the correlations between bottom pressure and motion acceleration in both temporal and frequency domains, we can improve the accuracy of acceleration measurements. Based on these insights, we have developed a novel testing system that synchronously measures bottom pressure and acceleration, using bottom pressure as a corrective mechanism for the measured acceleration signals. Empirical results show that the maximum relative error in peak motion acceleration is only 4.86%, demonstrating the effectiveness of our proposed method.
Junhua Luo et al 2024 Meas. Sci. Technol. 35 086102
This research investigated the accuracy of picking P-wave arrival times in rock-fracture acoustic emission signals. To simulate the mining scenario, Gaussian white noise and pulse noise were added to data collected in the laboratory. Complete ensemble empirical mode decomposition with adaptive noise combined with wavelet denoising (CEEMDAN + Wavelet) was improved in this paper: the Spearman rank correlation coefficient was adopted to effectively select intrinsic mode functions for denoising while retaining the inherent characteristics of the rock-fracture signal. The absolute amplitude and energy change rate of the envelope signal, calculated from the Hilbert transform, were used as inputs of the normalized short-term-average/long-term-average (STA/LTA) algorithm to pick the P-wave arrival time. The reliability of this method was tested on 30 groups of recorded laboratory rock-fracture data and 60 groups of noise-added data. Taking the manual picks as the standard, the errors of the CEEMDAN + Wavelet + STA/LTA + AIC (Akaike information criterion) method with the absolute amplitude of the signal as input are all within 10 ms, and 86.67% of the results are within 5 ms. The proposed method effectively addresses the false picks caused by the sensitivity of the AIC and traditional STA/LTA methods to strong noise, and achieves relatively high accuracy and stability on low signal-to-noise-ratio signals. This work contributes to monitoring microscopic changes in rock bodies and is of great significance for the prediction and monitoring of geological disasters.
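The STA/LTA trigger mentioned above is straightforward to implement. The sketch below computes the ratio with cumulative sums and picks the first threshold crossing on a synthetic trace; the threshold, window lengths and signal model are assumed values, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def sta_lta(x, n_sta, n_lta):
    """STA/LTA ratio on x**2, with both windows ending at the same sample."""
    e = np.asarray(x, dtype=float) ** 2
    c = np.concatenate(([0.0], np.cumsum(e)))
    i = np.arange(n_lta, e.size + 1)           # window end positions
    sta = (c[i] - c[i - n_sta]) / n_sta
    lta = (c[i] - c[i - n_lta]) / n_lta
    return sta / (lta + 1e-12), i - 1          # ratio and sample index

# Synthetic trace: white noise with a decaying 60 Hz arrival at sample 5000
# (all parameters assumed for illustration).
fs, onset = 1_000, 5_000
trace = 0.1 * rng.standard_normal(10_000)
k = np.arange(5_000)
trace[onset:] += np.sin(2 * np.pi * 60 * k / fs) * np.exp(-k / 2_000)

ratio, idx = sta_lta(trace, n_sta=50, n_lta=1_000)
picked = idx[np.argmax(ratio > 3.0)]           # first threshold crossing
print(f"picked arrival at sample {picked} (true onset {onset})")
```

The short window reacts to the arrival while the long window still reflects the noise floor, so the ratio jumps at onset; refining such a coarse pick is exactly where AIC-based pickers come in.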
Xiao Niu et al 2024 Meas. Sci. Technol. 35 085102
Temperature field measurements are of great significance in industrial production, scientific research, and other fields. Contact temperature measurements usually achieve high spatial resolution by increasing the layout density of the sensors. However, in practical applications, high-density sensor placement is often difficult to achieve. Therefore, when the number of sensors is limited, it is necessary to fit or interpolate the temperatures of the sampling points to reconstruct the temperature field distribution. This study proposed a temperature field reconstruction method based on a femtosecond-laser-prepared multiwavelength fibre Bragg grating (FBG) array. The multiwavelength FBG array was prepared using the femtosecond laser phase mask method combined with stress stretching, and applied to measure the temperature field distribution in a tubular high-temperature furnace. Cubic spline interpolation and backpropagation (BP) neural networks were used to construct two-dimensional temperature field models from the temperature distribution data measured by the FBGs, and the prediction accuracies of the two models were compared. The test results show that the root mean square error of the temperature field distribution reconstructed using the BP neural network is 0.7333 °C, approximately 23.18% of that of the cubic spline interpolation model, indicating that the proposed approach is a high-precision temperature field reconstruction method.
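A one-dimensional version of the spline-based reconstruction can be sketched as follows: sparse FBG readings along the furnace axis are interpolated with a cubic spline and compared against an assumed ground-truth profile. All positions, temperatures and noise levels are hypothetical, and the BP neural network branch is omitted.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def true_profile(z):
    # Assumed furnace temperature profile (°C) along the axis, for illustration.
    return 800.0 + 150.0 * np.sin(np.pi * z / 0.5)

# Sparse FBG measurement points along the furnace axis (hypothetical values).
z_fbg = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])        # sensor positions (m)
T_fbg = true_profile(z_fbg) + 0.5 * np.random.default_rng(7).standard_normal(6)

# Reconstruct the axial temperature distribution by cubic spline interpolation.
spline = CubicSpline(z_fbg, T_fbg)
z_dense = np.linspace(0.0, 0.5, 101)
T_rec = spline(z_dense)

rmse = np.sqrt(np.mean((T_rec - true_profile(z_dense)) ** 2))
print(f"reconstruction RMSE: {rmse:.2f} °C")
```

The residual error here comes from both interpolation between sparse knots and the measurement noise at the knots; a learned model such as a BP network can trade some of that error off against training data.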
Xin Li et al 2024 Meas. Sci. Technol. 35 072002
The health condition of rolling bearings has a direct impact on the safe operation of rotating machinery. Their working environment is harsh and their working conditions are complex, which brings challenges to fault diagnosis. With the development of computer technology, deep learning has been applied in the field of fault diagnosis and has developed rapidly. Among deep learning methods, the convolutional neural network (CNN) has received great attention from researchers due to its powerful data mining ability and feature adaptive learning ability. Based on recent research hotspots, the development history and trends of CNNs are summarized and analyzed. Firstly, the basic structure of the CNN is introduced, the important recent progress of classical CNN models for rolling bearing fault diagnosis is studied, and the problems with classic CNN algorithms are pointed out. Secondly, to address these problems and drawing on recent research achievements, various methods and principles for optimizing CNNs are introduced and compared from the perspectives of deep feature extraction, hyperparameter optimization, and network structure optimization. Although significant progress has been made in CNN-based fault diagnosis of rolling bearings, there is still room for improvement in addressing issues such as low accuracy on imbalanced data, weak model generalization, and poor network interpretability. Therefore, future development trends of CNNs are discussed finally: transfer learning models are introduced to improve the generalization ability of CNNs, and interpretable CNNs are used to increase network interpretability.
Victor H R Cardoso et al 2024 Meas. Sci. Technol. 35 072001
This work addresses the historical development of techniques and methodologies for measuring the internal diameter of transparent tubes, starting from the original contributions of Anderson and Barr published in 1923 in the first issue of Measurement Science and Technology. Progress in this field is summarized, highlighting the emergence and significance of measurement approaches based on optical fibers.
Weiqing Liao et al 2024 Meas. Sci. Technol. 35 062002
Mechanical fault diagnosis is crucial for ensuring the normal operation of mechanical equipment. With the rapid development of deep learning technology, big-data-driven methods provide a new perspective on the fault diagnosis of machinery. However, mechanical equipment operates in the normal condition most of the time, so the collected data are imbalanced, which affects the performance of mechanical fault diagnosis. As a new approach for generating data, the generative adversarial network (GAN) can effectively address the issues of limited and imbalanced data in practical engineering applications. This paper provides a comprehensive review of GANs for mechanical fault diagnosis. Firstly, the development of GAN-based mechanical fault diagnosis, the basic theory of the GAN and various GAN variants are briefly introduced. Subsequently, the variants are summarized and categorized from the perspective of labels and models, and the corresponding applications are outlined. Lastly, the limitations of current research, future challenges and trends, and guidance on selecting a GAN for practical applications are discussed.
Jianghong Zhou et al 2024 Meas. Sci. Technol. 35 062001
Predictive maintenance (PdM) is currently the most cost-effective maintenance method for industrial equipment, offering improved safety and availability of mechanical assets. A crucial component of PdM is the remaining useful life (RUL) prediction for machines, which has garnered increasing attention. With the rapid advancements in industrial internet of things and artificial intelligence technologies, RUL prediction methods, particularly those based on pattern recognition (PR) technology, have made significant progress. However, a comprehensive review that systematically analyzes and summarizes these state-of-the-art PR-based prognostic methods is currently lacking. To address this gap, this paper presents a comprehensive review of PR-based RUL prediction methods. Firstly, it summarizes commonly used evaluation indicators based on accuracy metrics, prediction confidence metrics, and prediction stability metrics. Secondly, it provides a comprehensive analysis of typical machine learning methods and deep learning networks employed in RUL prediction. Furthermore, it delves into cutting-edge techniques, including advanced network models and frontier learning theories in RUL prediction. Finally, the paper concludes by discussing the current main challenges and prospects in the field. The intended audience of this article includes practitioners and researchers involved in machinery PdM, aiming to provide them with essential foundational knowledge and a technical overview of the subject matter.
Zheyu Wang et al 2024 Meas. Sci. Technol. 35 052003
The market for service robots is expanding as labor costs continue to rise. Faced with intricate working environments, fault detection and diagnosis are crucial to ensure the proper functioning of service robots. The objective of this review is to systematically investigate the realm of service robots' fault diagnosis through the application of Structural Topic Modeling. A total of 289 papers were included, culminating in ten topics: advanced algorithm application, data-learning-based evaluation, automated equipment maintenance, actuator diagnosis for manipulators, non-parametric methods, distributed diagnosis in multi-agent systems, signal-based anomaly analysis, integration of complex control frameworks, event knowledge assistance, and mobile robot particle filtering methods. These topics spanned service robot hardware and software failures, diverse service robot systems, and a range of advanced algorithms for fault detection in service robots. Asia-Pacific, Europe, and the Americas, recognized as three pivotal regions propelling the advancement of service robots, were employed as covariates in this review to investigate regional disparities. The review found that current research tends to favor the use of artificial intelligence (AI) algorithms to address service robots' complex system faults and vast volumes of data. The topics of algorithms, data learning, automated maintenance, and signal analysis are advancing with the support of AI, gaining increasing popularity as a burgeoning trend. Additionally, variations in research focus across different regions were found: the Asia-Pacific region tends to prioritize algorithm-related studies, while Europe and the Americas show a greater emphasis on robot safety issues.
The integration of diverse technologies holds the potential to bring forth new opportunities for future service robot fault diagnosis. Simultaneously, regional standards on data, communication, and other aspects can streamline the development of fault diagnosis methods for service robots.
Fortmeier et al
High-quality aspherical and freeform surfaces are in high demand, and the high-accuracy form measurement of such surfaces is a challenging task. To explore the current status of form measurement systems for complex surfaces such as aspheres and freeforms, interlaboratory comparison measurements are performed. This study presents the pseudonymized results obtained using three different surfaces (metal asphere, glass asphere, toroidal surface) in a total of six different round robins. These results were taken from a total of 13 different measurement instruments based on 9 different measurement principles and operated at 12 different laboratories.
They were analyzed using a sophisticated procedure that was first developed in 2018 and then refined and tested on simulated data in 2022 to address the challenges of such a comparison at this level of accuracy.
In the current study, we applied these refined methods to data acquired from tactile and optical point measurements as well as from optical areal measurements. As there are no absolutely measured and very well characterized reference standard aspherical and freeform surfaces available at the accuracy level of a few tens of nanometers root-mean-square, the approximated true forms of the surfaces were derived from the measurements and indicate the manufacturing accuracy of the surface forms. Then, the measurements' deviations from the approximated true forms were analyzed; these directly indicate the systematic measurement errors of the instruments.
By also comparing the approximated true forms from the two different round robins for each surface, additional insights into the reliability and stability of these so-called virtual reference topographies (VRTs) were gained.
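The core of the comparison idea can be made concrete with a short sketch: with no absolute reference surface available, the virtual reference topography is approximated from the measurements themselves, and each instrument's deviation from it estimates its systematic error. The instrument names, offsets, and the simple pointwise mean used below are invented for illustration; the study's actual averaging procedure is more refined.

```python
import numpy as np

rng = np.random.default_rng(0)
true_form = np.sin(np.linspace(0, np.pi, 100))  # unknown in practice

# Simulated topography profiles from three hypothetical instruments,
# each with its own systematic offset plus measurement noise.
measurements = {
    "instrument_A": true_form + 20e-3 + rng.normal(0, 5e-3, 100),
    "instrument_B": true_form - 15e-3 + rng.normal(0, 5e-3, 100),
    "instrument_C": true_form + 5e-3 + rng.normal(0, 5e-3, 100),
}

# Approximated true form ("virtual reference topography"):
# pointwise mean over all instruments.
vrt = np.mean(list(measurements.values()), axis=0)

# Systematic error estimate per instrument:
# RMS of its deviation from the VRT.
systematic_rms = {
    name: float(np.sqrt(np.mean((m - vrt) ** 2)))
    for name, m in measurements.items()
}
```

Because the offsets partly cancel in the mean, the VRT sits closer to the true form than any single instrument, which is what makes the deviation from it a useful proxy for systematic error.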
Lorenz Keck et al 2024 Meas. Sci. Technol.
A new Kibble balance is being built at the National Institute of Standards and Technology (NIST). For the first time in one of the highly accurate versions of this type of balance, a single flexure mechanism is used for both modes of operation: the weighing mode and the velocity mode. The mechanism is at the core of the new balance design, as it represents a paradigm shift for NIST away from knife-edge-based balance mechanisms, which exhibit hysteresis in the measurement procedure of the weighing mode. Mechanical hysteresis may be a limiting factor in the performance of highly accurate Kibble balances approaching single-digit-nanonewton repeatability on a nominal 100 g mass, as targeted in this work. Flexure-based mechanisms are known to exhibit very little static hysteresis when used as a null detector. However, for larger and especially longer-lasting deformations, flexures are known to exhibit anelastic drift. We seek to characterize, and ideally compensate for, this anelastic behavior after deflections during the velocity mode, to enable a Kibble-balance measurement accurate at the 10^-8 level on a nominal 100 g mass artifact with a single flexure-based balance mechanism.
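For context, the two modes named above implement the textbook Kibble-balance principle: the velocity mode calibrates the coil's force factor Bl from the induced voltage U = Bl·v, and the weighing mode balances the weight with F = Bl·I, so that m = U·I/(g·v). The sketch below is an illustration of that principle only, not the NIST design; all numerical values are invented.

```python
# Kibble-balance principle: link a mass to electrical quantities.
g = 9.80665      # local gravitational acceleration, m/s^2 (nominal)
v = 2.0e-3       # coil velocity in velocity mode, m/s
U = 1.0          # induced voltage at that velocity, V
I = 1.96133e-3   # current that balances the weight, A

Bl = U / v       # force factor from velocity mode, V*s/m
m = Bl * I / g   # mass inferred from weighing mode, kg (here ~100 g)
```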
Huang et al
The precision and stability of anomaly detection methods are vital for the secure and efficient operation of machinery. In this paper, a finite element model is first used to analyze the shaft orbit of a cantilever rotor from the perspective of fault mechanisms. A shaft-orbit generative adversarial network is then proposed and applied to detect the blade fouling fault. A variational autoencoder is used as the foundational network architecture for extracting high-dimensional latent features from shaft-orbit images. Concurrently, the seventh-order moment of the shaft-orbit images is extracted and embedded into a bypass within the generator, thereby enhancing the accuracy of fault detection. Two sets of real-world blade fouling fault data are collected and meticulously analyzed. The results demonstrate that the proposed method exhibits higher accuracy and more robust generalization capability in anomaly detection. Additionally, the utilization of gradient information for the localization and visual analysis of anomalies dynamically tracks the spatial evolution of the rotor shaft orbit throughout its entire lifecycle. The data generation capability and interpretability of the proposed model can effectively support the digital twin modeling and health management of rotating machinery.
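The hand-crafted feature mentioned above, a seventh-order image moment, can be sketched briefly. The abstract does not specify the exact moment definition used, so the function below computes standardized central moments of the pixel-intensity distribution as a plausible stand-in; the image is a random placeholder.

```python
import numpy as np

def standardized_moment(image, order):
    """Order-k standardized central moment of the pixel intensities."""
    x = np.asarray(image, dtype=float).ravel()
    mu, sigma = x.mean(), x.std()
    if sigma == 0:
        return 0.0  # flat image: all higher moments vanish
    return float(np.mean(((x - mu) / sigma) ** order))

orbit_image = np.random.default_rng(1).random((64, 64))  # placeholder
m7 = standardized_moment(orbit_image, 7)
```

Odd-order standardized moments capture asymmetry of the intensity distribution, which is one way a distorted shaft orbit can leave a signature beyond what low-order statistics show.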
Li et al
Due to the fragility of single-sensor positioning technology in complex scenarios, especially in complex urban areas, multi-sensor positioning technology is becoming increasingly popular. To further improve the robustness of the positioning system by fully utilizing the information from various sensors, this article proposes a differential-GNSS-visual-inertial navigation system (DGVINS) that tightly fuses differential global navigation satellite system (GNSS), visual and inertial information to provide accurate, robust and seamless position information for intelligent navigation applications. DGVINS effectively utilizes all sensor measurements within the factor graph optimization (FGO) framework. When using the GNSS carrier phase, single-epoch ambiguity optimization is employed, which avoids the need for cycle-slip detection and adapts to complex environments. We conducted experiments on public datasets with various features and compared the performance of plain differential GNSS (DGNSS), DGNSS+Inertial, and the state-of-the-art GNSS-visual-inertial navigation systems (GVINS). We also compared the performance of different combinations of GNSS differential factors in various environments. Due to the superiority of differential GNSS and its appropriate integration with visual and inertial measurements, the experimental results demonstrate that DGVINS exhibits significant improvements in accuracy, stability, and continuity in both GNSS-challenged and vision-challenged environments.
Fan et al
Dispersion entropy (DE) is widely used to quantify the complexity of nonlinear time series. In order to improve the ability to capture fault characteristics, a novel approach called coded dispersion entropy (CDE) has been introduced in recent years. CDE aims to expand the number of possible dispersion patterns and enhance the encoding of similar dispersion patterns. However, the coding method of CDE ignores the amplitude differences between dispersion elements and average elements, resulting in the inaccurate assignment of samples. Additionally, CDE is unable to extract effective information from other time scales. These limitations hinder the ability of CDE to effectively characterize faults. This paper proposes an improved multiscale coded dispersion entropy (IMCDE) to overcome these limitations. The method introduces an interval scaling factor R and uses the sum of and difference between the mean element and R as new coding boundaries; this more reasonable quadratic coding scheme resolves the insensitivity of the CDE coding mode to small amplitude differences. Additionally, a composite coarse-graining process is introduced that rearranges the first coarse-grained point in sequence, yielding multiple sets of sequences; the average probability of the same dispersion pattern at each scale is calculated to correct the entropy error. The experimental results from two sets of bearing faults demonstrate the effectiveness of this method in extracting critical features associated with faulty bearings. Furthermore, the method achieved higher classification accuracy and smaller classification error than multiscale dispersion entropy (MDE) and multiscale coded dispersion entropy (MCDE).
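To ground the quantity these variants build on, here is the standard single-scale dispersion entropy: samples are mapped to c classes via the normal CDF, dispersion patterns of length m are counted, and the Shannon entropy of the pattern distribution is returned. This is plain DE, not the paper's IMCDE, which additionally applies the quadratic recoding around R and averages pattern probabilities over composite coarse-grained sequences.

```python
import math
from statistics import NormalDist

def dispersion_entropy(x, m=2, c=3, delay=1):
    """Standard dispersion entropy of a 1-D sequence."""
    mu = sum(x) / len(x)
    sigma = (sum((v - mu) ** 2 for v in x) / len(x)) ** 0.5 or 1.0
    nd = NormalDist(mu, sigma)
    # Map each sample to one of c classes via the normal CDF.
    z = [min(c, max(1, round(c * nd.cdf(v) + 0.5))) for v in x]
    # Count dispersion patterns of embedding dimension m.
    counts = {}
    for i in range(len(z) - (m - 1) * delay):
        pattern = tuple(z[i + j * delay] for j in range(m))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total)
                for n in counts.values())
```

A constant signal yields a single pattern and zero entropy; a varied signal approaches the maximum ln(c^m), which is why DE discriminates regular from complex vibration signatures.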
Bartosz Pruchnik et al 2024 Meas. Sci. Technol. 35 085901
Scanning probe microscopy (SPM) is a broad family of diagnostic methods. A common limitation of SPM is that it interacts only with the specimen surface, which is especially troublesome for complex volumetric systems, e.g. microbial or microelectronic ones. Scanning thermal microscopy (SThM) overcomes that constraint, since thermal information is collected from a broader volume. We present a transformer-bridge-based setup for resistive-nanoprobe microscopy. With a low-frequency (LF) (approx. 1 kHz) detection signal, the bridge resolution becomes independent of parasitic capacitances present in the measurement setup. We present a characterisation of the setup and its metrological description, with a system resolution of 0.6 mK at a sensitivity of 5 mV K−1. The transformer-bridge setup provides galvanic separation, enabling measurements in various environments, as pursued for purposes of molecular biology. We present SThM measurement results of a high-thermal-contrast sample of carbon fibres in an epoxy resin. Finally, we analyse the influence of thermal imaging on topography imaging in terms of information channel capacity. We conclude that the transformer-bridge-based SThM system is a fully functional design when operated at low driving frequencies with resistive thermal nanoprobes from Kelvin Nanotechnology.
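The two figures of merit quoted above are linked by a back-of-envelope relation: with a sensitivity S (bridge output voltage per kelvin) and a voltage noise floor V_noise, the smallest resolvable temperature step is dT = V_noise / S. The noise value below is chosen so the arithmetic reproduces the reported 0.6 mK at 5 mV K−1; it is an assumed illustration, not a figure from the paper.

```python
S = 5e-3          # thermal sensitivity, V/K (as reported)
V_noise = 3e-6    # assumed output noise floor, V (illustrative)
dT = V_noise / S  # implied temperature resolution, K -> 0.6 mK
```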
Kanglin Xing et al 2024 Meas. Sci. Technol.
The telescoping ballbar is widely utilized for diagnosing accuracy and identifying faults in machine tools and industrial robots. Currently, there are no established standards for determining the optimal feeding speed for ballbar tests. This lack of clear guidelines results in time inefficiency in measurements and inconsistencies in dynamic measurements, which complicates the comparison of ballbar test results under various conditions or across different machine platforms. To mitigate dynamic variations in ballbar results, an updated ballbar data processing method that integrates the unscented Kalman filter (UKF) and particle swarm optimization (PSO) was developed and validated using real ballbar data measured at multiple feeding speeds and simulated data with varying vibration magnitudes generated through the Renishaw ballbar simulator. Experimental results revealed that the dynamic components extracted from the ballbar results increased in correlation with the vibration measured at different feeding speeds and in the simulations. Moreover, the variations in the results measured at different feeding speeds after PSO-UKF processing were significantly reduced. The findings confirm the effectiveness of the proposed method in minimizing the dynamics of the ballbar results. Ultimately, this approach enhances the efficiency and accuracy of ballbar testing and offers a general method for improved diagnostics.
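Of the two ingredients named above, PSO is the compact one; a minimal sketch follows. Here it minimizes a simple 1-D test function, whereas in the paper's setting the cost would score UKF filtering parameters against ballbar data. The hyperparameters are generic textbook values, not the authors'.

```python
import random

def pso(cost, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal 1-D particle swarm optimization."""
    rng = random.Random(42)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                       # personal best positions
    pbest_cost = [cost(p) for p in pos]
    g = pbest[min(range(n_particles), key=pbest_cost.__getitem__)]
    for _ in range(iters):
        for i in range(n_particles):
            # Inertia + pull toward personal and global bests.
            vel[i] = (w * vel[i]
                      + c1 * rng.random() * (pbest[i] - pos[i])
                      + c2 * rng.random() * (g - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i], c
                if c < cost(g):
                    g = pos[i]
    return g

# Example: recover the minimum of (x - 1.3)^2 on [-10, 10].
best = pso(lambda x: (x - 1.3) ** 2, -10.0, 10.0)
```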
Min Jing et al 2024 Meas. Sci. Technol.
Fluorescence lifetime is the main characteristic parameter of fluorescence. Gated detection methods are widely used to record fluorescence lifetime attenuation curves and to fit fluorescence lifetime parameters in order to identify the species of substances. However, the fluorescence attenuation of each fluorophore in a multi-component compound interferes with the others, affecting the accuracy of identification. In this paper, we propose a method to accurately identify substances by using the occurrence time of the secondary crest of the fluorescence lifetime attenuation curve, based on the principle of gated detection to measure the fluorescence lifetime. Furthermore, we design a fluorescence lifetime imaging measurement system and select the same areas of interest in the images for analysis and comparison. The average lifetime of the fluorescence and the occurrence time of the secondary crest are considered as the characteristic parameters. We use five commercially available motor engine oils as the experimental samples and compare the recognition performance of different kernel functions based on a support vector machine (SVM). The radial basis kernel function (RBF) presents the best performance in terms of recognition accuracy and speed. The recognition rates of the SVM model with the average fluorescence lifetime and the occurrence time of the secondary crest in the attenuation curve of the fluorescence lifetime as a feature vector are 76.24% and 74.65%, respectively. The recognition rate of the SVM model which combines them as feature vectors reaches 91.88%. The experimental results demonstrate that the occurrence time of the secondary crest in the attenuation curve of the fluorescence lifetime can be employed as the basis for substance identification in the analysis of the fluorescence characteristics of multi-component compounds, whose recognition accuracy is similar to the average fluorescence lifetime parameter.
Moreover, the occurrence time of the secondary crest of the fluorescence lifetime attenuation curve can be used as a characteristic parameter to identify multi-component compounds.
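The baseline quantity in the abstract above, the fluorescence lifetime tau of a single-exponential decay I(t) = I0·exp(−t/tau), can be recovered from gated-intensity samples by a log-linear least-squares fit, as sketched below. The secondary-crest feature the paper proposes is a separate, empirically observed landmark on multi-component decay curves and is not modeled here; the data are synthetic.

```python
import math

tau_true, I0 = 4.2e-9, 1000.0                    # 4.2 ns lifetime (made up)
t = [i * 0.5e-9 for i in range(20)]              # gate times, s
I = [I0 * math.exp(-ti / tau_true) for ti in t]  # ideal decay samples

# Linear regression of ln(I) on t: the slope equals -1/tau.
logI = [math.log(v) for v in I]
n = len(t)
tbar, ybar = sum(t) / n, sum(logI) / n
slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, logI))
         / sum((ti - tbar) ** 2 for ti in t))
tau_fit = -1.0 / slope
```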
Marcus Soter et al 2024 Meas. Sci. Technol.
Platelets are activated immediately upon contact with non-physiological surfaces. Minimizing surface-induced platelet activation is important not only for platelet storage but also for other blood-contacting devices and implants. Chemical surface modification tunes the response of cells to contacting surfaces, but it requires a long process involving many regulatory challenges to transfer into a marketable product. Biophysical modification overcomes these limitations by modifying only the surface topography of already approved materials. The large, random structures available on platelet storage bags do not have a significant impact on platelets, which, at only 1-3 µm, are the smallest of the blood cells. We have recently demonstrated the feasibility of mask-free nanoprinting with fluid force microscopy (FluidFM) for writing dot-grid and hexagonal structures. Here, we demonstrate that the technique allows the fabrication of nanostructures with varying features, including grid, circle, triangle, and Pacman-like structures. Characteristics of the nanostructures, including height, width, and cross-line, were analyzed and compared using atomic force microscopy imaging. Based on the results, we identified several technical issues, such as the printing direction and the shape of the structures, that directly altered nanofeatures during printing. Importantly, both geometry and interspace governed the degree of platelet adhesion; in particular, structures with triangular shapes and small interspaces prevented platelet adhesion better than others. We confirm that FluidFM is a powerful technique for precisely fabricating a variety of desired nanostructures for the development of platelet/blood-contacting devices, provided technical issues during printing are well controlled.
Hamidreza Eivazi et al 2024 Meas. Sci. Technol. 35 075303
High-resolution reconstruction of flow-field data from low-resolution and noisy measurements is of interest due to the prevalence of such problems in experimental fluid mechanics, where the measurement data are in general sparse, incomplete and noisy. Deep-learning approaches have been shown to be suitable for such super-resolution tasks. However, a high number of high-resolution examples is needed, which may not be available in many cases. Moreover, the obtained predictions may fail to comply with physical principles, e.g. mass and momentum conservation. Physics-informed deep learning provides frameworks for integrating data and physical laws for learning. In this study, we apply physics-informed neural networks (PINNs) for super-resolution of flow-field data both in time and space from a limited set of noisy measurements without having any high-resolution reference data. Our objective is to obtain a continuous solution of the problem, providing a physically-consistent prediction at any point in the solution domain. We demonstrate the applicability of PINNs for the super-resolution of flow-field data in time and space through three canonical cases: Burgers' equation, two-dimensional vortex shedding behind a circular cylinder and the minimal turbulent channel flow. The robustness of the models is also investigated by adding synthetic Gaussian noise. Furthermore, we show the capabilities of PINNs to improve the resolution and reduce the noise in a real experimental dataset consisting of hot-wire-anemometry measurements. Our results show the adequate capabilities of PINNs in the context of data augmentation for experiments in fluid mechanics.
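The physics-informed loss underlying the approach above combines a data-mismatch term with a PDE-residual term. A real PINN evaluates the residual by automatic differentiation of the network; the dependency-free sketch below instead checks the residual of Burgers' equation u_t + u·u_x − ν·u_xx = 0 with finite differences on a grid, for an arbitrary candidate field, to make the two loss terms concrete. All fields and values are synthetic.

```python
import numpy as np

nu = 0.05
x = np.linspace(0.0, 1.0, 101)
t = np.linspace(0.0, 0.5, 51)
X, T = np.meshgrid(x, t)           # rows index time, columns index space

def burgers_residual(u, dt, dx, nu):
    """Interior-point residual of u_t + u*u_x - nu*u_xx (central differences)."""
    u_t = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * dt)
    u_x = (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * dx)
    u_xx = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dx ** 2
    return u_t + u[1:-1, 1:-1] * u_x - nu * u_xx

u = np.exp(-T) * np.sin(np.pi * X)       # arbitrary candidate solution
sparse_obs = u[::10, ::10] + 0.01        # sparse "measurements" (offset = noise)

# The two terms a PINN would balance during training:
data_loss = float(np.mean((u[::10, ::10] - sparse_obs) ** 2))
physics_loss = float(np.mean(burgers_residual(u, t[1] - t[0],
                                              x[1] - x[0], nu) ** 2))
total_loss = data_loss + physics_loss
```

Minimizing the sum drives the prediction both toward the sparse noisy data and toward PDE consistency everywhere in the domain, which is what allows super-resolution without high-resolution references.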
Isaac Spotts et al 2024 Meas. Sci. Technol. 35 075208
To improve the temporal resolution in an optical delay system that uses a conventional mechanical delay stage, we integrate an in-line liquid crystal (LC) wave retarder. Previous implementations of LC optical delay methods are limited due to the small temporal window provided. Using a conventional mechanical delay stage system in series with the LC wave retarder, the temporal window is lengthened. Additionally, the limitation on temporal resolution resulting from the minimum optical path alteration (resolution of 400 nm) of the conventionally used mechanical delay stage is reduced via the in-line wave retarder (resolution of 50 nm). Interferometric autocorrelation measurements are conducted at multiple laser emission frequencies (349, 357, 375, 394, and 405 THz) using the in-line LC and conventional mechanical delay stage systems. The in-line LC system is compared to the conventional mechanical delay stage system to determine the improvements in temporal resolution relating to maximum resolvable frequency. This work demonstrates that the integration of the in-line LC system can extend the maximum resolvable frequency from 375 to 3000 THz. The in-line LC system is also applied for measurement of terahertz pulses.
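The temporal figures quoted above follow directly from the optical path resolution: a path change dx delays the pulse by dt = dx/c, and the maximum frequency resolvable from samples spaced dt apart is the Nyquist limit f_max = 1/(2·dt). Plugging in the stated 400 nm and 50 nm steps reproduces the 375 THz and 3000 THz limits.

```python
c = 2.998e8  # speed of light, m/s

def f_max(dx):
    """Maximum resolvable frequency for an optical-path step dx (metres)."""
    dt = dx / c              # temporal sampling step
    return 1.0 / (2.0 * dt)  # Nyquist limit

f_stage = f_max(400e-9)      # mechanical delay stage, 400 nm step -> ~375 THz
f_lc = f_max(50e-9)          # in-line LC retarder, 50 nm step -> ~3000 THz
```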
Yuvarajendra Anjaneya Reddy et al 2024 Meas. Sci. Technol.
Current optical flow-based neural networks for Particle Image Velocimetry (PIV) are largely trained on synthetic datasets emulating real-world scenarios. While synthetic datasets provide greater control and variation than can be achieved with experimental datasets for supervised learning, a deeper understanding is required of which factors dictate the learning behaviors of deep neural networks for PIV. In this study, we investigate the performance of the Recurrent All-Pairs Field Transforms (RAFT-PIV) network, the current state-of-the-art deep learning architecture for PIV, by testing it on unseen experimentally generated datasets. The results from RAFT-PIV are compared with a conventional cross-correlation-based method, Adaptive PIV. The experimental PIV datasets were generated for a typical scenario of flow past a circular cylinder in a rectangular channel. These test datasets encompassed variations in particle diameters, particle seeding densities, and flow speeds, all falling within the parameter range used for training RAFT-PIV. We also explore how different image pre-processing techniques can impact and potentially enhance the performance of RAFT-PIV on real-world datasets. Thorough testing with real-world experimental PIV datasets reveals the resilience of the optical flow-based method to variations in PIV hyperparameters, in contrast to the conventional PIV technique. The ensemble-averaged Root Mean Squared Errors (RMSE) between the RAFT-PIV and Adaptive PIV estimations generally range between 0.5 and 2 [px] and show a slight reduction as particle densities increase or Reynolds numbers decrease. Furthermore, the findings indicate that employing image pre-processing techniques to enhance input particle image quality does not improve RAFT-PIV predictions; instead, it incurs higher computational costs and adversely affects estimations of small-scale structures.
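The comparison metric used in the abstract can be made explicit: an ensemble-averaged RMSE between two velocity-field estimates (e.g. RAFT-PIV vs. Adaptive PIV), in pixel units. The fields below are synthetic stand-ins with shape (n_snapshots, ny, nx, 2); the ~1 px disagreement is chosen arbitrarily within the 0.5–2 px range reported.

```python
import numpy as np

def ensemble_rmse(u_pred, u_ref):
    """RMSE over all snapshots, grid points and velocity components."""
    diff = np.asarray(u_pred) - np.asarray(u_ref)
    return float(np.sqrt(np.mean(diff ** 2)))

rng = np.random.default_rng(3)
u_ref = rng.normal(0, 1, (10, 32, 32, 2))           # reference estimates
u_pred = u_ref + rng.normal(0, 1.0, u_ref.shape)    # ~1 px disagreement

rmse = ensemble_rmse(u_pred, u_ref)
```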
A Spaett and B G Zagar 2024 Meas. Sci. Technol. 35 075013
Fully developed laser speckle patterns are, due to their high contrast and statistical nature, well suited to measure strain and displacement via an appropriately designed measurement system. Laser speckle patterns are formed when a sufficiently coherent light source, such as a HeNe-laser, illuminates an optically rough surface. Therefore, methods based on laser speckle patterns can be applied to any surface scatterer with a minimum mean surface roughness of about a quarter of the laser's wavelength. This includes materials such as thin natural and technical fibres as well as foils, for which the presented measurement system, including the digital signal processing, was designed. In order to achieve the best possible resolution of a speckle-based measurement system, combined with a sufficiently small measurement uncertainty, all available design parameters must be optimised. One of these parameters is the speckle size, which is dependent on the properties of the imaging optics. In this paper, a subjective laser speckle-based measurement system based on a so-called 4f-optical setup is presented. This setup allows the speckle size to be controlled in the axial and lateral dimensions separately, which is achieved with the help of an aperture in the Fourier plane of the optics. It is shown that the optimal speckle size for the presented measurement system depends not only on the physical setup but also on the signal processing applied. The signal processing routine estimates displacements of the speckle pattern, leading to an estimate for the strain. Additionally, it is demonstrated that the optimal speckle size can be lower than the commonly reported optimum between two and five pixel pitches, necessary to circumvent aliasing in the image data. While this is shown for a measurement setup using 4f-optics, the results are of general importance to speckle-based strain or displacement measurement systems and should thus be taken into account.
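A common rule of thumb connects the aperture to the speckle size discussed above: the mean subjective speckle diameter on the sensor is d ≈ 1.22·(1 + M)·λ·F, with magnification M and aperture f-number F, so closing the Fourier-plane aperture (raising F) enlarges the speckles relative to the pixel pitch. The values below are illustrative, not taken from the paper's setup.

```python
lam = 632.8e-9         # HeNe wavelength, m
M = 1.0                # imaging magnification (assumed)
pixel_pitch = 3.45e-6  # sensor pixel pitch, m (assumed)

def speckle_diameter(f_number):
    """Mean subjective speckle diameter for a given aperture f-number."""
    return 1.22 * (1 + M) * lam * f_number

d = speckle_diameter(8.0)              # at f/8
speckles_per_pixel = d / pixel_pitch   # compare to the 2-5 px rule of thumb
```

With these example values the speckle spans roughly 3.6 pixel pitches, inside the commonly cited two-to-five-pixel window that the paper shows can in fact be undercut with suitable signal processing.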