A classic state estimation method, the Kalman filter integrates prior information, system dynamics models, and measurement data to produce posterior state estimates. However, measurements are often subject to unknown disturbances, leading to abnormal or inaccurate values that degrade Kalman filtering performance. To address this challenge, this paper introduces a novel adaptive Kalman filter aided by posterior state estimates from long short-term memory (LSTM) networks. The proposed method first uses prior residuals to construct a chi-square distribution model, which facilitates the detection of abnormal measurements. Upon identifying an abnormal measurement, an LSTM network generates an alternative predictive measurement to replace the inaccurate one. This enhances the model's capability to handle complex relationships and measurement uncertainties and improves filtering estimation accuracy. For extreme cases where alternative predictive measurements alone cannot meet the filter's requirements, a noise covariance adjustment method is introduced to mitigate the adverse effects of abnormal measurements on the posterior estimate. Throughout the LSTM-aided adaptive process, deep learning preserves the stable structure of the filter while enhancing its adaptability: the network provides predicted measurements conforming to the chi-square distribution, together with the corresponding measurement noise covariance, ensuring the filter's resilience in dynamic environments with unknown factors. Finally, the efficacy of the proposed algorithm is validated through simulations and experiments on vehicle positioning with inertial sensors.
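The chi-square detection step described in the abstract can be sketched as follows. This is an illustrative scalar-measurement example, not the authors' implementation; the threshold is one common choice (the 95% quantile of the chi-square distribution with one degree of freedom).

```python
# Chi-square gating of a scalar Kalman innovation (illustrative sketch).
# The prior residual nu = z - H x_prior is normalised by its covariance
# S = H P H^T + R; under nominal conditions nu^2 / S follows a
# chi-square distribution with 1 degree of freedom.
CHI2_95_1DOF = 3.841  # 95% quantile, chi-square with 1 dof

def is_measurement_abnormal(z, z_pred, S, threshold=CHI2_95_1DOF):
    """Return True if the measurement fails the chi-square test."""
    nu = z - z_pred          # innovation (prior residual)
    d2 = nu * nu / S         # normalised squared residual
    return d2 > threshold

# A nominal measurement passes; a grossly disturbed one is flagged
# (and would then be replaced by the LSTM's predictive measurement).
print(is_measurement_abnormal(10.2, 10.0, 0.5))   # False
print(is_measurement_abnormal(14.0, 10.0, 0.5))   # True
```

In the vector case the scalar test generalises to the Mahalanobis form nu^T S^{-1} nu compared against the chi-square quantile for the measurement dimension.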
ISSN: 1361-6501
Launched in 1923 Measurement Science and Technology was the world's first scientific instrumentation and measurement journal and the first research journal produced by the Institute of Physics. It covers all aspects of the theory, practice and application of measurement, instrumentation and sensing across science and engineering.
Shu Yin et al 2024 Meas. Sci. Technol. 35 075113
WenTong Yi et al 2024 Meas. Sci. Technol. 35 075112
Heat flux is a physical quantity that characterises the strength of heat transfer per unit area, and heat transfer occurs wherever there is a temperature difference. Herein, a remote heat flux measurement method based on the thermomagnetic effect is proposed. Using Bloch's law, Faraday's law of electromagnetic induction, and Fourier's law, a measurement model is established that maps heat flux to the magnetisation of a magnetic film and finally to the induced electromotive force in a Helmholtz coil. A compression corner was fabricated, and the heat flux measurement method was experimentally verified in a Mach 6 shock tube wind tunnel; a Schlieren experiment was performed for comparison. In the experiments, the temperature resolution of the magnetic film reached 0.36 K, and the temperature response reached 100 kHz. The range of the proposed heat flux measurement method is up to 5 , and the measurement error is less than 10%. At a temporal resolution of 10 μs, the change in heat flux is mapped as the flow over the magnetic film transitions from laminar to turbulent, showing the different effects of hypersonic heat flux on the boundary layer and attachment position. The magnetic thin film thus shows potential for heat flux estimation.
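Bloch's law, the first link in the measurement chain above, relates the film's magnetisation to its temperature. A minimal sketch follows; the M0 and Tc values in the usage lines are hypothetical, not taken from the paper.

```python
def bloch_magnetisation(T, M0, Tc):
    """Bloch's T^(3/2) law: M(T) = M0 * (1 - (T/Tc)^(3/2)) below the
    Curie temperature Tc; magnetisation vanishes at and above Tc."""
    if T >= Tc:
        return 0.0
    return M0 * (1.0 - (T / Tc) ** 1.5)

# Hypothetical film: M0 = 1.0 (normalised), Tc = 600 K.
# A small temperature rise maps to a small, measurable drop in M,
# which induces an EMF in the pickup coil via Faraday's law.
print(bloch_magnetisation(300.0, 1.0, 600.0))  # ~0.646
print(bloch_magnetisation(300.4, 1.0, 600.0) < bloch_magnetisation(300.0, 1.0, 600.0))  # True
```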
Arash Nemati et al 2024 Meas. Sci. Technol. 35 075405
Neutron imaging has gained increasing attention in recent years. A notable domain is the in-situ study of the flow and concentration of hydrogen-rich materials, which demands precise quantification of the evolving concentrations. Several implementations deviate from the ideal conditions that allow the direct applicability of the Beer–Lambert law to assess this concentration. The objective of this work is to address these deviations by applying both calibration and correction procedures to ensure and validate accurate quantitative measurements during 2D and 3D neutron imaging conducted at the cold neutron source of the NeXT instrument of the Institut Laue–Langevin, Grenoble, France. Linear attenuation coefficients and non-linear correlations are proposed to measure the water concentration based on the sample-to-detector distance. Furthermore, the effectiveness of the black body grid correction method, introduced by Boillat et al (2018 Opt. Express 26 15769), is evaluated, which accounts for spurious deviations arising from the scattering of neutrons by the sample and the surrounding environment. The applicability of the Beer–Lambert law without any data correction is found to be reasonable within a limited equivalent thickness (e.g. below 4 mm of water), beyond which the correction algorithm proves highly effective in eliminating spurious effects. Notably, this correction method maintains its effectiveness even with transmissions below 1%. We also examine the impact of grid location and resolution with respect to sample heterogeneity.
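For reference, the uncorrected Beer–Lambert inversion that the calibration and correction procedures improve upon can be sketched as below; the attenuation coefficient in the example is a hypothetical value, not the one measured in the paper.

```python
import math

def water_thickness_mm(transmission, mu_per_mm=0.35):
    """Invert Beer-Lambert, I/I0 = exp(-mu * t), for the equivalent
    water thickness t in mm. Per the text, this direct inversion is
    only reasonable while scattering is negligible (roughly below
    4 mm of water); beyond that a scattering correction is needed."""
    return -math.log(transmission) / mu_per_mm

# A transmission of exp(-1.4) recovers 4 mm with mu = 0.35 / mm.
print(water_thickness_mm(math.exp(-1.4)))  # ~4.0
```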
Hu Wang et al 2024 Meas. Sci. Technol. 35 076129
As one of the key components in rotating machinery, the rolling element bearing is widely used in actual production, for example in wind turbines, vehicles and machine tools. A bearing's remaining useful life (RUL) is an important indicator for its performance assessment, relating to maintenance and production safety. To overcome the insensitivity of conventional health indicators (HIs) to bearing degradation, this study proposes a subspace clustering method based on manifold learning to evaluate the evolution of health status, which describes the degradation distribution via a two-class model and identifies the degradation at each stage. Motivated by the inconsistent degradation processes of bearings in practice, this study proposes an adaptive multi-stage degradation identification criterion that can effectively identify different degradation rates. Based on the identified degradation states, a multi-stage exponential degradation model is established to accurately predict the RUL. The effectiveness of the proposed method is validated on open datasets. The experimental results show that the proposed method can effectively identify different degradation rates and accurately determine the boundary times of the multi-stage degradation, and its RUL prediction accuracy is significantly improved compared with traditional HIs.
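The final extrapolation step of a single-stage exponential degradation model can be sketched as follows. This is a generic illustration; the multi-stage model in the paper fits separate parameters per identified stage, and all numbers here are hypothetical.

```python
import math

def rul_exponential(t_now, a, b, threshold):
    """Remaining useful life under HI(t) = a * exp(b * t): solve
    a * exp(b * t_fail) = threshold for t_fail and subtract the
    current time (clamped at zero once the threshold is reached)."""
    t_fail = math.log(threshold / a) / b
    return max(t_fail - t_now, 0.0)

# Hypothetical fit: HI grows from a = 0.1 at rate b = 0.05 per hour,
# and failure is declared when HI reaches 1.0.
print(rul_exponential(30.0, 0.1, 0.05, 1.0))  # ~16.05 hours left
```

A multi-stage variant would re-fit a and b each time the identification criterion detects a change in degradation rate, then extrapolate only the current stage's model.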
Yun Wang et al 2024 Meas. Sci. Technol. 35 075209
In traditional stitching measurements, the central sub-aperture is usually used as the reference, which is not suitable for incomplete spherical surfaces lacking a central sub-aperture. Measuring each sub-aperture requires manually readjusting the position and attitude of the measured component, resulting in low measurement accuracy and efficiency. To address this issue, this study proposes a confocal-focusing-based method for stitching measurement of large-aperture-angle incomplete spherical surfaces. By automatically and accurately determining the position of the common focus through confocal focusing, the positioning accuracy of the measured component is improved. A sub-aperture stitching model was built, and a coordinate mapping rotation algorithm and an overlapping-area error compensation algorithm were applied to the surface shape of each sub-aperture, achieving stitching measurement of large-aperture-angle incomplete spherical surfaces. Finally, a confocal interference stitching measurement system was built to carry out stitching experiments on a sphere and a large-aperture-angle incomplete spherical surface. The results indicate that the method improves PV measurement accuracy by a factor of 2.2, RMS measurement accuracy by a factor of 2.3, and measurement efficiency by a factor of 1.6, providing a high-precision approach for measuring large-aperture-angle spherical surface shapes.
Xin Li et al 2024 Meas. Sci. Technol. 35 072002
The health condition of rolling bearings has a direct impact on the safe operation of rotating machinery, yet their harsh working environments and complex operating conditions pose challenges for fault diagnosis. With the development of computer technology, deep learning has been applied to fault diagnosis and has advanced rapidly. Among deep learning methods, the convolutional neural network (CNN) has received great attention from researchers owing to its powerful data mining and adaptive feature learning abilities. Based on recent research hotspots, the development history and trends of CNNs are summarized and analyzed. First, the basic structure of the CNN is introduced, important recent progress of classical CNN models for rolling bearing fault diagnosis is reviewed, and the problems with classical CNN algorithms are pointed out. Second, to solve these problems, various methods and principles for optimizing CNNs are introduced and compared from the perspectives of deep feature extraction, hyperparameter optimization, and network structure optimization. Although significant progress has been made in CNN-based fault diagnosis of rolling bearings, there is still room for improvement in addressing issues such as low accuracy on imbalanced data, weak model generalization, and poor network interpretability. Therefore, future development trends of CNNs are discussed: transfer learning models are introduced to improve the generalization ability of CNNs, and interpretable CNNs are used to increase network interpretability.
Victor H R Cardoso et al 2024 Meas. Sci. Technol. 35 072001
This work addresses the historical development of techniques and methodologies for measuring the internal diameter of transparent tubes, since the original contributions of Anderson and Barr published in 1923 in the first issue of Measurement Science and Technology. Progress in this field is summarized, highlighting the emergence and significance of measurement approaches supported by optical fiber.
Weiqing Liao et al 2024 Meas. Sci. Technol. 35 062002
Mechanical fault diagnosis is crucial for ensuring the normal operation of mechanical equipment. With the rapid development of deep learning technology, big data-driven methods provide a new perspective for the fault diagnosis of machinery. However, mechanical equipment operates in the normal condition most of the time, so the collected data are imbalanced, which affects the performance of mechanical fault diagnosis. As a new approach for generating data, the generative adversarial network (GAN) can effectively address the issues of limited and imbalanced data in practical engineering applications. This paper provides a comprehensive review of GANs for mechanical fault diagnosis. First, the development of GAN-based mechanical fault diagnosis, the basic theory of the GAN, and various GAN variants (GANs) are briefly introduced. Subsequently, GANs are summarized and categorized from the perspectives of labels and models, and the corresponding applications are outlined. Lastly, the limitations of current research, future challenges and trends, and guidance on selecting a GAN for practical applications are discussed.
Jianghong Zhou et al 2024 Meas. Sci. Technol. 35 062001
Predictive maintenance (PdM) is currently the most cost-effective maintenance method for industrial equipment, offering improved safety and availability of mechanical assets. A crucial component of PdM is the remaining useful life (RUL) prediction for machines, which has garnered increasing attention. With the rapid advancements in industrial internet of things and artificial intelligence technologies, RUL prediction methods, particularly those based on pattern recognition (PR) technology, have made significant progress. However, a comprehensive review that systematically analyzes and summarizes these state-of-the-art PR-based prognostic methods is currently lacking. To address this gap, this paper presents a comprehensive review of PR-based RUL prediction methods. Firstly, it summarizes commonly used evaluation indicators based on accuracy metrics, prediction confidence metrics, and prediction stability metrics. Secondly, it provides a comprehensive analysis of typical machine learning methods and deep learning networks employed in RUL prediction. Furthermore, it delves into cutting-edge techniques, including advanced network models and frontier learning theories in RUL prediction. Finally, the paper concludes by discussing the current main challenges and prospects in the field. The intended audience of this article includes practitioners and researchers involved in machinery PdM, aiming to provide them with essential foundational knowledge and a technical overview of the subject matter.
Zheyu Wang et al 2024 Meas. Sci. Technol. 35 052003
The market for service robots is expanding as labor costs continue to rise. Faced with intricate working environments, fault detection and diagnosis are crucial to ensure the proper functioning of service robots. The objective of this review is to systematically investigate the realm of service robots' fault diagnosis through the application of Structural Topic Modeling. A total of 289 papers were included, culminating in ten topics, including advanced algorithm application, data learning-based evaluation, automated equipment maintenance, actuator diagnosis for manipulator, non-parametric method, distributed diagnosis in multi-agent systems, signal-based anomaly analysis, integrating complex control framework, event knowledge assistance, mobile robot particle filtering method. These topics spanned service robot hardware and software failures, diverse service robot systems, and a range of advanced algorithms for fault detection in service robots. Asia-Pacific, Europe, and the Americas, recognized as three pivotal regions propelling the advancement of service robots, were employed as covariates in this review to investigate regional disparities. The review found that current research tends to favor the use of artificial intelligence (AI) algorithms to address service robots' complex system faults and vast volumes of data. The topics of algorithms, data learning, automated maintenance, and signal analysis are advancing with the support of AI, gaining increasing popularity as a burgeoning trend. Additionally, variations in research focus across different regions were found. The Asia-Pacific region tends to prioritize algorithm-related studies, while Europe and the Americas show a greater emphasis on robot safety issues. 
The integration of diverse technologies holds the potential to bring forth new opportunities for future service robot fault diagnosis. Simultaneously, regional standards on data, communication, and other aspects can streamline the development of fault diagnosis methods for service robots.
Nie et al
In the realm of mechanical machining, tool wear is an unavoidable phenomenon. Monitoring tool wear is crucial for enhancing machining quality and advancing automation in the manufacturing process. This paper investigates an innovative approach to tool wear monitoring that integrates machine vision with force signal analysis, relying on a deep residual two-stream convolutional model optimized with the scSE (concurrent spatial and channel squeeze and excitation) attention mechanism (scSE-ResNet-50-TSCNN). The force signals are converted into the corresponding wavelet scale images following wavelet threshold denoising and the continuous wavelet transform (CWT). Concurrently, the images are processed using contrast limited adaptive histogram equalization (CLAHE) and the structural similarity index method (SSIM), allowing selection of the most suitable image inputs. The processed data are then input into the developed scSE-ResNet-50-TSCNN model for precise identification of the tool wear state. To validate the model, X850 carbon fibre reinforced polymer (CFRP) and Ti-6Al-4V titanium alloy were employed as laminated experimental materials in a series of tool wear tests during which the relevant machining data were collected. The experimental results underscore the model's effectiveness, achieving a recognition accuracy of 93.86%. Compared with alternative models on the identical dataset, the proposed approach performs best, showcasing efficient monitoring capabilities in contrast to single-stream or unoptimized networks. Consequently, it excels at monitoring tool wear status and provides crucial technical support for enhancing machining quality control and advancing intelligent manufacturing.
Dong et al
Flexible piezoelectric sensors are widely used in applications such as physiological signal monitoring and human-computer interaction. The present study introduces a BaTiO3-CNT/RTV piezoelectric sensor fabricated using a filter paper template. It incorporates micro-scale fiber stacking and 1% CNT doping in the microstructure, enhancing sensor sensitivity from 0.07 V/N to 0.69 V/N, an almost tenfold improvement. Furthermore, the study investigates the influence of factors such as the flexible substrate of the sensing film, its thickness, and the mass fractions of the various materials on the output voltage. The sensor exhibits good repeatability under 5000 cyclic loads, high elongation at break, fast response (80 ms) and recovery (90 ms) times, and good linearity. It also demonstrates outstanding sensitivity (12 mV/10°) when monitoring different finger bending states, enabling real-time, sensitive, and reliable hand motion tracking. This sensor holds promising prospects for future developments in intelligent grasping and sign language translation.
Yang et al
In the civil aviation system, inertial measurement unit (IMU)/global navigation satellite system (GNSS) integrated navigation remains the most effective navigation scheme. Building upon this, the use of redundant IMUs and a multi-constellation system can significantly enhance navigation and integrity performance. However, a redundant IMUs/GNSS integrated navigation system is susceptible to more complex failure modes, including GNSS faults (satellite and constellation faults) and IMU faults. Additionally, dealing with correlated measurements across the redundant configuration presents a challenging issue. Therefore, we propose an integrity monitoring method for a decentralized redundant IMUs/GNSS integrated navigation system. This method considers GNSS and IMU faults in the integrity monitoring process and is specifically designed for a decentralized filter construction with redundancy. Furthermore, because measurements are correlated between redundant filters, optimal fusion coefficients are determined by imposing unbiasedness and minimizing the trace of the covariance of the primary filter. An optimal allocation scheme for continuity risk is also established to improve navigation and integrity performance. Simulation results demonstrate the feasibility and effectiveness of the proposed integrity monitoring method; in the presence of correlated measurements, the integrated navigation system also exhibits superior performance.
Shi et al
Achieving precision in positioning under conditions of significant interference remains an unresolved challenge in research. This study introduces a low-cost Ultra-Wideband (UWB) distance compensation model that addresses electromagnetic wave loss in practical indoor settings. This paper employs kurtosis to detect Non-Line-of-Sight (NLOS) environments, which are frequently induced by pedestrian movement. The Generalized Extreme Studentized Deviate (GESD) algorithm is utilized to discern and eliminate outliers in ranging values and the Piecewise Cubic Hermite Interpolating Polynomial (PCHIP) algorithm compensates for the eliminated data points. Finally, Kalman filtering is used to improve UWB ranging results, allowing for better error elimination and compensation. Experimental results demonstrate that our proposed algorithm has higher accuracy and the mean square error improvement ratio can reach more than 20% in dynamic positioning tests.
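The outlier-rejection-and-compensation stage can be illustrated with a simplified stand-in: a z-score test in place of GESD and linear interpolation in place of PCHIP. Both substitutions are ours, chosen to keep the sketch short, and the threshold is hypothetical.

```python
def clean_ranges(ranges, z_thresh=3.0):
    """Flag gross outliers in a window of UWB ranging values by z-score
    (a simplified stand-in for GESD) and fill them by linear
    interpolation between the nearest inliers (standing in for PCHIP).
    Assumes the first and last samples are inliers."""
    n = len(ranges)
    mean = sum(ranges) / n
    var = sum((r - mean) ** 2 for r in ranges) / n
    std = var ** 0.5 or 1.0          # guard against a zero-variance window
    bad = [abs(r - mean) / std > z_thresh for r in ranges]
    out = list(ranges)
    for i in range(n):
        if bad[i]:
            lo = next(j for j in range(i - 1, -1, -1) if not bad[j])
            hi = next(j for j in range(i + 1, n) if not bad[j])
            w = (i - lo) / (hi - lo)
            out[i] = (1 - w) * ranges[lo] + w * ranges[hi]
    return out

# Hypothetical ranging window (m) with one NLOS-induced spike at index 3.
cleaned = clean_ranges([5.0, 5.1, 5.2, 9.0, 5.3, 5.4], z_thresh=2.0)
print(cleaned[3])  # 5.25, the interpolated replacement for the 9.0 spike
```

The cleaned ranges would then feed the Kalman filter stage described in the abstract.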
Bai et al
In this paper, we propose and design a novel dual-range Tunnel Magnetoresistance (TMR) current sensor with a single magnetic ring structure. This design incorporates two distinct magnetic guiding effects, namely magnetic shunt and magnetic aggregation, within the same magnetic ring. By integrating a high-sensitivity TMR sensor chip with a closed-loop feedback circuit, we achieve a TMR current sensor with excellent linearity, high resolution, as well as high frequency response. The magnetic ring structure is first modeled and simulated, establishing a correlation between the distribution of magnetic induction intensity and the parameters of the magnetic ring and feedback coils. Through simulation optimization and theoretical calculations, we determine the optimal positions for TMR sensor chips in the magnetic ring, suitable for both current ranges. When a signal current is present, the TMR sensor chip generates a weak differential voltage signal, which is subsequently amplified, processed, and automatically transmitted to the laptop via a serial port. Furthermore, the sensor allows for automatic switching between the two current ranges. The results demonstrate that our designed dual-range current sensor exhibits outstanding performance characteristics, including a high resolution of 500 μA in the small range, accuracy of 0.10%, excellent linearity of 0.011%, and a fast frequency response of 500 kHz. These features make it highly applicable in various fields such as new energy vehicles and smart grids, indicating promising prospects for its widespread utilization.
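The closed-loop feedback principle behind the sensor can be sketched as an ampere-turns balance at the zero-flux operating point; the turns count in the example is hypothetical.

```python
def primary_current(feedback_current_a, feedback_turns, primary_turns=1):
    """At zero-flux balance the feedback ampere-turns cancel the primary
    ampere-turns: I_p * N_p = I_f * N_f, so I_p = I_f * N_f / N_p.
    The TMR chip only has to sense the small residual field near null,
    which is what preserves linearity over the full range."""
    return feedback_current_a * feedback_turns / primary_turns

# Hypothetical 1000-turn feedback coil: 10 mA of feedback current
# corresponds to 10 A in a single-pass primary conductor.
print(primary_current(0.010, 1000))  # 10.0
```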
Open all abstracts, in this tab
Marcus Soter et al 2024 Meas. Sci. Technol.
Platelets are activated immediately upon contact with non-physiological surfaces. Minimizing surface-induced platelet activation is important not only for platelet storage but also for other blood-contacting devices and implants. Chemical surface modification tunes the response of cells to contacting surfaces, but transferring it into a marketable product is a long process with many regulatory challenges. Biophysical modification overcomes these limitations by modifying only the surface topography of already approved materials. The large, random structures available on platelet storage bags have little impact on platelets because platelets are so small (only 1-3 µm) compared with other cells. We have recently demonstrated the feasibility of mask-free nanoprinting with fluid force microscopy (FluidFM) for writing dot-grid and hexagonal structures. Here, we demonstrate that the technique allows the fabrication of nanostructures with varying features, including grid, circle, triangle, and Pacman-like structures. Characteristics of the nanostructures, including height, width, and cross-line, were analyzed and compared using atomic force microscopy imaging. Based on the results, we identified several technical issues, such as the printing direction and shape of structures, that directly altered nanofeatures during printing. Importantly, both geometry and interspace governed the degree of platelet adhesion; in particular, structures with triangular shapes and small interspaces prevented platelet adhesion better than others. We confirm that FluidFM is a powerful technique for precisely fabricating a variety of desired nanostructures for the development of platelet/blood-contacting devices, provided technical issues during printing are well controlled.
Hamidreza Eivazi et al 2024 Meas. Sci. Technol. 35 075303
High-resolution reconstruction of flow-field data from low-resolution and noisy measurements is of interest due to the prevalence of such problems in experimental fluid mechanics, where the measurement data are in general sparse, incomplete and noisy. Deep-learning approaches have been shown to be suitable for such super-resolution tasks. However, a large number of high-resolution examples is needed, which may not be available in many cases. Moreover, the obtained predictions may fail to comply with physical principles, e.g. mass and momentum conservation. Physics-informed deep learning provides frameworks for integrating data and physical laws for learning. In this study, we apply physics-informed neural networks (PINNs) for super-resolution of flow-field data in both time and space from a limited set of noisy measurements without any high-resolution reference data. Our objective is to obtain a continuous solution of the problem, providing a physically consistent prediction at any point in the solution domain. We demonstrate the applicability of PINNs for super-resolution of flow-field data in time and space through three canonical cases: Burgers' equation, two-dimensional vortex shedding behind a circular cylinder, and the minimal turbulent channel flow. The robustness of the models is also investigated by adding synthetic Gaussian noise. Furthermore, we show the capability of PINNs to improve the resolution and reduce the noise in a real experimental dataset consisting of hot-wire-anemometry measurements. Our results show the adequate capabilities of PINNs in the context of data augmentation for experiments in fluid mechanics.
Isaac Spotts et al 2024 Meas. Sci. Technol. 35 075208
To improve the temporal resolution in an optical delay system that uses a conventional mechanical delay stage, we integrate an in-line liquid crystal (LC) wave retarder. Previous implementations of LC optical delay methods are limited due to the small temporal window provided. Using a conventional mechanical delay stage system in series with the LC wave retarder, the temporal window is lengthened. Additionally, the limitation on temporal resolution resulting from the minimum optical path alteration (resolution of 400 nm) of the conventionally used mechanical delay stage is reduced via the in-line wave retarder (resolution of 50 nm). Interferometric autocorrelation measurements are conducted at multiple laser emission frequencies (349, 357, 375, 394, and 405 THz) using the in-line LC and conventional mechanical delay stage systems. The in-line LC system is compared to the conventional mechanical delay stage system to determine the improvements in temporal resolution relating to maximum resolvable frequency. This work demonstrates that the integration of the in-line LC system can extend the maximum resolvable frequency from 375 to 3000 THz. The in-line LC system is also applied for measurement of terahertz pulses.
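The quoted frequency figures follow directly from a Nyquist-style argument on the optical-path sampling resolution (two samples per optical period); this back-of-envelope check is ours, not a formula stated in the paper.

```python
C = 299_792_458.0  # speed of light in vacuum [m/s]

def max_resolvable_thz(path_resolution_m):
    """Nyquist-limited maximum resolvable frequency, in THz, for a
    given optical-path sampling resolution: f_max = c / (2 * delta)."""
    return C / (2.0 * path_resolution_m) / 1e12

print(max_resolvable_thz(400e-9))  # ~375 THz (mechanical delay stage)
print(max_resolvable_thz(50e-9))   # ~3000 THz (in-line LC retarder)
```

The two outputs reproduce the 375 THz and 3000 THz limits cited in the abstract for the 400 nm and 50 nm path resolutions, respectively.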
Yuvarajendra Anjaneya Reddy et al 2024 Meas. Sci. Technol.
Current optical flow-based neural networks for particle image velocimetry (PIV) are largely trained on synthetic datasets emulating real-world scenarios. While synthetic datasets provide greater control and variation than experimental datasets for supervised learning, their use requires a deeper understanding of which factors dictate the learning behavior of deep neural networks for PIV. In this study, we investigate the performance of the Recurrent All-Pairs Field Transforms (RAFT-PIV) network, the current state-of-the-art deep learning architecture for PIV, by testing it on unseen experimentally generated datasets. The results from RAFT-PIV are compared with a conventional cross-correlation-based method, Adaptive PIV. The experimental PIV datasets were generated for a typical scenario of flow past a circular cylinder in a rectangular channel. These test datasets encompassed variations in particle diameters, particle seeding densities, and flow speeds, all falling within the parameter range used for training RAFT-PIV. We also explore how different image pre-processing techniques can impact and potentially enhance the performance of RAFT-PIV on real-world datasets. Thorough testing with real-world experimental PIV datasets reveals the resilience of the optical flow-based method's variations to PIV hyperparameters, in contrast to the conventional PIV technique. The ensemble-averaged root mean squared errors (RMSE) between the RAFT-PIV and Adaptive PIV estimations generally range between 0.5 and 2 px and show a slight reduction as particle densities increase or Reynolds numbers decrease. Furthermore, the findings indicate that employing image pre-processing techniques to enhance input particle image quality does not improve RAFT-PIV predictions; instead, it incurs higher computational costs and adversely affects estimations of small-scale structures.
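The comparison metric used above is a plain root-mean-squared error between the two displacement estimates, ensemble-averaged over frames. A minimal per-field sketch, with hypothetical values in px:

```python
def rmse(field_a, field_b):
    """Root-mean-squared error between two flattened displacement
    fields of equal length (values in pixels)."""
    assert len(field_a) == len(field_b)
    n = len(field_a)
    return (sum((a - b) ** 2 for a, b in zip(field_a, field_b)) / n) ** 0.5

# Two hypothetical 4-vector displacement estimates (px):
print(rmse([1.0, 2.0, 3.0, 4.0], [1.5, 2.0, 2.5, 4.0]))  # ~0.354
```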
A Spaett and B G Zagar 2024 Meas. Sci. Technol. 35 075013
Fully developed laser speckle patterns are, due to their high contrast and statistical nature, well suited to measuring strain and displacement via an appropriately designed measurement system. Laser speckle patterns are formed when a sufficiently coherent light source, such as a HeNe laser, illuminates an optically rough surface. Therefore, methods based on laser speckle patterns can be applied to any surface scatterer with a minimum mean surface roughness of about a quarter of the laser's wavelength. This also includes materials such as thin natural and technical fibres as well as foils, for which the presented measurement system, including the digital signal processing, was designed. In order to achieve the best possible resolution of a speckle-based measurement system, combined with a sufficiently small measurement uncertainty, all available design parameters must be optimised. One of these parameters is the speckle size, which is dependent on the properties of the imaging optics. In this paper a subjective laser speckle-based measurement system based on a so-called 4f optical setup is presented. This setup allows the speckle size to be controlled separately in the axial and lateral dimensions, which is achieved with the help of an aperture in the Fourier plane of the optics. It is shown that the optimal speckle size for the presented measurement system depends not only on the physical setup but also on the signal processing applied. The signal processing routine estimates displacements of the speckle pattern, leading to an estimate of the strain. Additionally, it is demonstrated that the optimal speckle size can be lower than the commonly reported optimum of between two and five pixel pitches, which is necessary to circumvent aliasing in the image data. While this is shown for a measurement setup using 4f optics, the results are of general importance to speckle-based strain or displacement measurement systems and should thus be taken into account.
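A commonly used estimate of the mean subjective speckle diameter, d ≈ 1.22 λ (1 + M) f/#, makes the speckle-size trade-off concrete. The comparison against the two-to-five pixel-pitch guideline below is illustrative only, with hypothetical optics and sensor values.

```python
def subjective_speckle_size_um(wavelength_nm, magnification, f_number):
    """Mean subjective speckle diameter d = 1.22 * lambda * (1 + M) * f/#,
    returned in micrometres."""
    return 1.22 * (wavelength_nm * 1e-3) * (1.0 + magnification) * f_number

# HeNe laser (632.8 nm), unit magnification, f/8 aperture:
d = subjective_speckle_size_um(632.8, 1.0, 8.0)
print(d)         # ~12.35 um
print(d / 3.45)  # ~3.6 pixel pitches on a hypothetical 3.45 um sensor
```

In the 4f setup of the paper, the aperture in the Fourier plane plays the role of the f-number here, which is what allows the speckle size to be tuned independently of the imaging geometry.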
Ata Can Çorakçı et al 2024 Meas. Sci. Technol.
In this paper, the application of a two-equations two-unknowns (2E-2U) method is described for the calibration of hydrophones and projectors below 1 kHz in a laboratory test tank. At low frequencies, amplitude and phase measurements for such calibrations are difficult to perform because the echo-free time of the test tank is not long enough, owing to transducer initial transients and reflections from the tank walls. To overcome these difficulties, the 2E-2U method is applied to the received (windowed) signals obtained during calibration measurements, making calibration possible at frequencies down to 250 Hz. Measurements in the test tank are performed for a hydrophone and a newly developed flextensional projector. First, the receive sensitivities of the hydrophone are calculated and validated by comparison with a pressure calibration in a closed chamber; good agreement is obtained between the two measurement platforms, with a maximum difference of 0.5 dB and an uncertainty of 1.3 dB. Then, the transmitting voltage response (TVR) of the flextensional projector is calculated and compared with calibration data obtained using the method defined in the relevant standards; good agreement is obtained between the two TVR datasets, with a maximum difference of 1.1 dB and an uncertainty of 1.7 dB.
S Soman et al 2024 Meas. Sci. Technol. 35 075905
Inspection of surface and nanostructure imperfections plays an important role in high-throughput manufacturing across various industries. This paper introduces a novel, parallelised version of the metrology and inspection technique coherent Fourier scatterometry (CFS). The proposed strategy employs parallelisation with multiple probes, facilitated by a diffraction grating generating multiple optical beams and detection using an array of split detectors. The article details the optical setup and design considerations, and presents results including independent detection verification, calibration curves for the different beams, and a data-stitching process for composite scans. The study concludes with discussions of the system's limitations and potential avenues for future development, emphasizing the significance of enhancing scanning speed for the widespread adoption of CFS as a commercial metrology tool.
Zelin Zhou et al 2024 Meas. Sci. Technol. 35 076304
Global navigation satellite system (GNSS) positioning performance in dense urban environments deteriorates significantly due to frequent non-line-of-sight (NLOS) and multipath errors. An accurate weighting scheme is critical for positioning, especially in urban environments. Traditional methods for determining the weights of observations typically rely on the carrier-to-noise density ratio (C/N0) and the elevations from satellites to receivers. Nevertheless, the performance of these methods degrades in dense urban settings, as C/N0 and elevation measurements fail to fully capture the intricacies of NLOS and multipath errors. In this paper, a novel GNSS observation weighting scheme based on a Hopular GNSS signal classifier, which can accurately identify LOS/NLOS signals using a medium-sized training dataset, is proposed to improve the urban kinematic navigation solution in real-time kinematic positioning mode. Four GNSS features (C/N0, time-differenced code-minus-carrier, loss-of-lock indicator, and satellite elevation) are employed in training the Hopular-based signal classifier. The performance of the new method is validated using two urban kinematic datasets collected in downtown Calgary by a U-blox F9P receiver with a low-cost antenna. For the first testing dataset, the results show that the Hopular-based weighting scheme outperforms the three most commonly used GNSS observation weighting schemes: C/N0, elevation, and a combined C/N0-elevation approach. Horizontal and vertical root-mean-squared (RMS) positioning errors of approximately 10.089 m and 12.592 m, respectively, are achieved using the proposed method, with improvements of 78.83%, 46.82% and 43.27% in horizontal positioning accuracy and 54.00%, 47.51% and 49.69% in vertical positioning accuracy compared with the C/N0, elevation and combined C/N0-elevation weighting schemes, respectively.
For the second testing dataset, similar performance is achieved, with approximately 11.631 m of horizontal RMS error and 10.158 m of vertical RMS error; improvements of 64.58%, 32.90% and 22.40% in horizontal positioning accuracy and 71.99%, 65.24% and 55.88% in vertical positioning accuracy are achieved compared with the C/N0, elevation and combined C/N0-elevation weighting schemes, respectively.
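The three baseline schemes named above (C/N0, elevation, and their combination) follow standard textbook forms; the sketch below uses those forms with illustrative coefficients. The values of `sigma0`, `a` and `b` are assumptions for illustration, not constants from the paper.

```python
import numpy as np

def elevation_variance(elev_deg, sigma0=0.3):
    """Elevation-dependent model: observation variance grows as
    1/sin^2(elevation), so low-elevation satellites are down-weighted.
    sigma0 (metres) is an illustrative zenith noise level."""
    return (sigma0 / np.sin(np.radians(elev_deg))) ** 2

def cn0_variance(cn0_dbhz, a=0.01, b=25.0):
    """C/N0-dependent model: variance proportional to 10^(-C/N0/10),
    so weak signals (typical of NLOS/multipath) get small weights.
    a and b are illustrative model coefficients."""
    return a + b * 10.0 ** (-cn0_dbhz / 10.0)

def combined_weight(elev_deg, cn0_dbhz):
    """Combined C/N0-elevation scheme as an inverse-variance weight."""
    return 1.0 / (elevation_variance(elev_deg) + cn0_variance(cn0_dbhz))
```

The paper's contribution replaces these heuristics with classifier-driven weights; the sketch only shows what the compared baselines typically look like.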
Jakub Svatos and Jan Holub 2024 Meas. Sci. Technol. 35 076122
This paper analyses the efficiency of various frequency cepstral coefficients (FCC) in a non-speech application, specifically in classifying acoustic impulse events (gunshots). Various methods for identifying such events are available, the majority based on time- or frequency-domain algorithms. However, both of these domains have their limitations and disadvantages. In this article, FCC, which combine the advantages of the frequency and time domains, are presented and analyzed. These features, originally developed for speech, have shown potential not only in speech-related applications but also in other acoustic applications. A comparison of classification efficiency is presented for features obtained using four different FCC: mel-frequency cepstral coefficients (MFCC), inverse mel-frequency cepstral coefficients (IMFCC), linear-frequency cepstral coefficients (LFCC), and gammatone-frequency cepstral coefficients (GTCC). An optimal frame length for the FCC calculation is also explored. Various gunshots from handguns and rifles of different calibers are used, along with multiple acoustic impulse events similar to gunshots that represent false alarms. More than 600 acoustic event records have been acquired and used for training and validation of two designed classifiers: a support vector machine and a neural network. Accuracy, recall and the Matthews correlation coefficient measure the classification success rate. The results reveal the superiority of GTCC over the other analyzed methods.
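All four FCC variants share one pipeline (framing, windowing, power spectrum, filterbank, log, DCT) and differ only in the filterbank's frequency spacing. A minimal numpy sketch of the linear-frequency case (LFCC) follows; the filter and coefficient counts are illustrative defaults, not values from the paper.

```python
import numpy as np

def triangular_filterbank(n_filters, n_fft):
    """Linearly spaced triangular filters (the LFCC case; mel or
    gammatone variants differ in the centre-frequency spacing)."""
    bins = np.linspace(0, n_fft // 2, n_filters + 2).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge
    return fb

def lfcc(frame, n_filters=20, n_ceps=12):
    """Cepstral coefficients of one frame:
    window -> power spectrum -> filterbank -> log -> DCT-II."""
    n_fft = len(frame)
    spec = np.abs(np.fft.rfft(frame * np.hanning(n_fft))) ** 2
    energies = triangular_filterbank(n_filters, n_fft) @ spec
    log_e = np.log(energies + 1e-10)
    # DCT-II written out directly to keep the sketch numpy-only
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), n + 0.5) / n_filters)
    return dct @ log_e
```

The frame length that the paper optimises corresponds to `n_fft` here; swapping the linear spacing in `triangular_filterbank` for a mel or gammatone scale yields the other variants compared in the study.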
Geoffrey de Villiers et al 2024 Meas. Sci. Technol.
Gravity measurements have uses in a wide range of fields, including geological mapping and mine-shaft inspection. The specific application in question sets limits on the survey and the amount of information that can be obtained. For example, in a conventional gravity survey at the Earth's surface a gravimeter is translated on a two-dimensional planar grid, taking measurements of the vertical component of gravity. If, however, the survey points cannot be chosen so freely, for example if the gravimeter is constrained to operate in a tunnel where only a one-dimensional line of data could be taken, less information will be obtained. To address this situation, we investigate an alternative approach in the form of an instrument which rotates around a central point, measuring the gravitational potential on the boundary of a sphere around the centre of the instrument. The ability to record additional components of gravity by rotating the gravimeter will give more information than is obtained with the single measurement traditionally taken at each point on a survey, consequently reducing ambiguities in interpretation. We term a device which measures the potential, or its radial derivatives, around the surface of a sphere a gravitational eye. In this article we explore ideas of resolution and propose a thought experiment for comparing the performance of diverse types of gravitational eye. We also discuss radial analytic continuation towards sources of gravity and the resulting resolution enhancement, before finally discussing the possibility of using cold-atom gravimetry and gradiometry to construct a gravitational eye. If realised, the gravitational eye will offer a revolutionary capability, enabling the maximum information to be obtained about features in all directions around it.
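For the conventional survey described above, the quantity recorded at each grid point is the vertical gravity component. For a single buried point mass this follows directly from Newton's law, as in the toy sketch below; the geometry and mass are illustrative, not taken from the paper.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def vertical_gravity(x, depth, mass):
    """Vertical gravity component along a 1-D survey line above a
    point mass buried at horizontal position 0 and depth d:
    g_z = G * m * d / (x^2 + d^2)^(3/2)."""
    return G * mass * depth / (x ** 2 + depth ** 2) ** 1.5
```

The profile is symmetric and peaks directly above the mass, which is exactly the ambiguity problem of a single-component survey: the same 1-D profile is consistent with many source configurations, motivating the extra gravity components the gravitational eye would record.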