To write about the significance of metrology in our high-tech world would mean to carry owls to Athens. Metrology and quality are two sides of the same coin, and high quality standards are a must for the majority of manufacturers in all industrial branches. The well-known slogan “If you can’t measure it, you can’t improve it”, attributed to Lord Kelvin, is one of the most important quotations in business and clearly expresses the position that decisions without a reliable data basis are fuzzy. For modern production technologies this phrase can simply be rephrased as follows: “You cannot produce anything better than you can measure it”. Many adaptations of this wisdom are meanwhile in circulation, such as “If you can’t measure it, you can’t make it” or “If you can’t measure it, you can’t manage it”①. The most pertinent expression of this causality is, in our opinion, probably the one attributed to the American statistician W. Edwards Deming, who famously said: “In God we trust, all others must bring data.” Generating, collecting and interpreting data seem to be more important than ever. In times of all-encompassing digitization, IoT mania, social networks, and mobile communication, the term Big Data refers mainly to the use of predictive analytics, user behavior analytics, or other advanced data analytics methods that extract value from a data system in order to find new correlations, spot business trends, prevent diseases, combat crime and so on18,19. The analysis of this trend is by no means the subject of our article.
Instead, we focus on data acquisition and evaluation in a very limited environment called optical metrology. On the one hand, the situation here is quite different from that in the socio-economic context of the Big Data hype mentioned above. On the other hand, there are some similarities that could be of interest for our topic. In both cases a user or customer is involved. While in the first case the client is largely passive and the system examines his behavior in the context of a larger community of acting individuals to come up with suggestions and conclusions, the client in our case is the active player who has a concrete problem (mainly to assess the quality of items or processes) and looks for the best way to solve it. In both cases, the generated data are the primary basis for decision-making. These decisions rest on an inverted causality that aims to draw conclusions from the achieved results – the measurement data – back to their causes (faults, shapes, deformations, strains, …), see Fig. 1. The difficulties of such so-called identification problems are instability, ambiguity, sensitivity to noise, and a lack of independent information20,21.
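To make the character of such an identification problem tangible, the following minimal sketch (in Python, with an entirely hypothetical smoothing matrix standing in for the actual measurement operator) shows how a naive inversion amplifies measurement noise, whereas a simple Tikhonov regularization stabilizes the estimate. It illustrates the general principle only, not a specific holographic reconstruction.

```python
import numpy as np

# Hypothetical forward model: the measurement y is a smoothed (blurred)
# version of the unknown cause x, plus noise -- a toy identification problem.
rng = np.random.default_rng(0)
n = 100
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = np.exp(-((i - j) ** 2) / (2 * 4.0 ** 2))       # smoothing operator (ill-conditioned)
x_true = np.sin(2 * np.pi * np.arange(n) / n)       # unknown quantity (e.g. a deformation profile)
y = A @ x_true + 0.01 * rng.standard_normal(n)      # noisy measurement data

# Naive inversion: small measurement noise is strongly amplified (instability).
x_naive = np.linalg.solve(A, y)

# Tikhonov regularization: trade data fidelity against a size/smoothness prior.
lam = 1e-2
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print("error of naive inversion      :", np.linalg.norm(x_naive - x_true))
print("error of regularized inversion:", np.linalg.norm(x_reg - x_true))
```

The regularization parameter trades resolution against noise amplification; any additional independent information narrows this trade-off, which is the point taken up again below.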
Metrology is not a matter of wishful thinking but a matter of facts. This holds especially for optical metrology, where we can usually see what we will get. But the belief in pictures is often seductive, and the transformation of optical data (intensities, phases, times of flight, …) into, e.g., dimensional data is by no means simple because of the indirect measuring process mentioned above. Consequently, the discussion of reliable procedures and measures for evaluating the performance of measurement systems and the trueness of the derived data may help to improve trust in optical measurement systems. However, the subject is far from simple, and an exhaustive treatment of the topic would need more substantial investigations and fill many pages. Several important players, components and procedures are part of the chain that leads from the given object under test and measurement problem to the selection and task-dependent design of a proper measurement system delivering trustworthy data. Fig. 2 illustrates the strategy of this selection process.
The user or customer has a specific problem for which he needs insights into certain properties of the current sample. Usually he discusses that problem with an expert who transforms the global and often fuzzy task into a specification sheet. This specification sheet contains more or less precise information about the data to be measured directly, the principles and devices used for their measurement, the algorithms that identify the best estimate of the desired data – the identification problem, which in the epistemological sense represents a question of causality22 – and the methods for estimating the reliability of the final results. This is in general a straightforward procedure, but it depends strictly on the specific measurement problem. There are many recipes and guidelines for the measurement of such data, as well as for the calculation, presentation and assessment of the final data that quantify the quality of the object under test. See, for instance, the ISO/BIPM Guide to the Expression of Uncertainty in Measurement23, which is a valuable tool for the estimation of the reliability of the provided data. However, the evaluation of these procedures and guidelines is not the subject of our paper. Our intention is rather to discuss features and measures that can be helpful for the selection and design of a specific measurement system and, finally, for the estimation and improvement of its performance.
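Before turning to these features, a brief sketch may illustrate what the GUM23 essentially provides: the propagation of standard uncertainties of uncorrelated input quantities through a measurement model. The model (thermal expansion of a gauge length) and all numerical values below are purely illustrative.

```python
import numpy as np

def combined_standard_uncertainty(grad, u_inputs):
    """GUM law of propagation for uncorrelated inputs:
    u_c(y) = sqrt( sum_i (df/dx_i)^2 * u(x_i)^2 )."""
    grad = np.asarray(grad, dtype=float)
    u_inputs = np.asarray(u_inputs, dtype=float)
    return float(np.sqrt(np.sum((grad * u_inputs) ** 2)))

# Illustrative measurement model: y = L0 * (1 + alpha * dT)
L0, alpha, dT = 100.0, 11.5e-6, 2.0        # mm, 1/K, K (hypothetical values)
grad = [1 + alpha * dT,                     # dy/dL0
        L0 * dT,                            # dy/dalpha
        L0 * alpha]                         # dy/ddT
u_inputs = [0.005, 1.0e-6, 0.5]             # standard uncertainties of L0, alpha, dT

u_c = combined_standard_uncertainty(grad, u_inputs)
print(f"combined standard uncertainty u_c = {u_c:.4f} mm")
print(f"expanded uncertainty U (k = 2)    = {2 * u_c:.4f} mm")
```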
Here we distinguish between general features that can be helpful to characterize the global quality of a measurement system and its three main components (see Fig. 2: illumination; sensing, including the selection of adequate sensors and their path planning24; and the evaluation of the measurement data, including their error analysis), and those features that are characteristic of a particular application. For the latter, the following list of specifications should be considered:
− the object under test, its intended use and the conditions of use,
− the quantities or features to be measured, such as dimensional, mechanical, spectral and kinematic quantities, material parameters, material faults, …,
− the measuring environment (cooperative or non-cooperative/harsh environment25),
− the type of interaction with the object, e.g. the appropriate measuring arrangement, the appropriate load of the object and the choice of an appropriate wavelength or sequence of wavelengths26, 27,
− the proper user interface (human-machine-interface28, graphical user interface GUI, and programming language29),
− the process capability of the measurement system (the process capability is a statistical measure that indicates the probability with which the specified goals can be achieved; cf. statistical process control, SPC30; see the sketch after this list),
− the measurement volume and/or the measurement area,
− the surface quality of the object (rough, optically smooth, translucent, …),
− the required measurement speed (e.g. the need to synchronize the measurement with the process timing for in-line measurement processes, or an off-line system decoupled from the process cycle following its own clock rate),
− the required measurement uncertainty31 (this statistical quantity for the estimation of the precision/repeatability of the results depends strongly on the measurement conditions and the properties of the measurement object; a rule of thumb says that the accepted uncertainty has to be smaller than one tenth of the tolerance to be verified by the system),
− the way of data presentation and subsequent use of data (quantitative, qualitative, lists, graphics, further processing, …).
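As announced in the list, the following sketch illustrates two of the quantitative criteria mentioned above: the process capability indices Cp and Cpk computed from a sample of measured values against a given tolerance, and the rule of thumb that the measurement uncertainty should not exceed one tenth of that tolerance. The specification limits, sample data and uncertainty value are hypothetical.

```python
import numpy as np

def capability_indices(values, lsl, usl):
    """Process capability Cp (spread vs. tolerance) and Cpk (including centring)."""
    mu, sigma = np.mean(values), np.std(values, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical in-line measurement of a dimension with tolerance [9.95, 10.05] mm
rng = np.random.default_rng(1)
values = rng.normal(10.01, 0.008, size=200)
lsl, usl = 9.95, 10.05

cp, cpk = capability_indices(values, lsl, usl)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}   (>= 1.33 is a common requirement)")

# Rule of thumb: measurement uncertainty below ~1/10 of the tolerance width.
u_measurement = 0.004                       # assumed expanded uncertainty in mm
tolerance = usl - lsl
print("uncertainty acceptable:", u_measurement <= tolerance / 10)
```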
There is obviously no all-in-one device suitable for every purpose, and consequently there is no general rule for the selection and design of a specific measurement system. The variety of objects, measurement conditions and measurement purposes is far too large. Consequently, it is the responsibility of the expert to select the system that is most appropriate for a specific purpose. However, there are several general rules for the design of optical measurement systems and features for their assessment which can be used to judge their performance objectively and to find ways to improve it. Richard Leach et al. have proposed five general rules for the design of the measurement pipeline in order to achieve better metrological performance, shorter measurement times, and lower consumption of resources. They call it Information-Rich Manufacturing Metrology (IRM)32:
− use information on nominal part geometry,
− use information on manufactured part geometry,
− use information from other measurements,
− use information on measurement process,
− use any other source of additional information.
Additional information is always useful to cope with the ill-posedness of the inverse problem33. A successful way to regularize it is to exploit the complete information content of a light wave21:
− primary information channels: intensity, frequency, phase, angular spectrum, and polarization,
− angular momentum of light: light spin and light orbital angular momentum,
− time of flight (light in flight),
− correlation properties in the light field: temporal and spatial coherence.
These channels can be processed as individual information carriers, such as in interferometry, where the phase is reconstructed from several intensity measurements using phase-shifting techniques34 (a minimal sketch is given below), or by multi-modal data acquisition in the context of a sensor fusion strategy35. For the inspection of extended objects, the limited space-bandwidth product of existing sensors is often a handicap for the high-resolution detection of existing faults. A smart combination of various techniques allows the skillful fusion of sequential measurements acquired at different scales and consequently offers a more effective way to find these faults without a time-consuming scan of the entire object with a single high-resolution method36,37,20.
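The announced sketch below shows how one of these channels – the phase – is recovered from pure intensity data with the classical four-step phase-shifting formula (shifts of 0, π/2, π and 3π/2); the interferograms are simulated rather than taken from a real instrument.

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Classical four-step phase-shifting formula for shifts 0, pi/2, pi, 3*pi/2:
    phi = atan2(I(3*pi/2) - I(pi/2), I(0) - I(pi))."""
    return np.arctan2(i3 - i1, i0 - i2)

# Simulated interferograms of a hypothetical wavefront phi_true (in radians)
x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
phi_true = 6 * np.pi * (x ** 2 + y ** 2)             # e.g. a defocus-like phase map
bias, mod = 0.5, 0.4                                  # background and modulation
frames = [bias + mod * np.cos(phi_true + k * np.pi / 2) for k in range(4)]

phi_wrapped = four_step_phase(*frames)                # wrapped phase in (-pi, pi]
residual = np.angle(np.exp(1j * (phi_wrapped - phi_true)))
print("max residual after reconstruction [rad]:", np.max(np.abs(residual)))
```

In practice the wrapped phase must still be unwrapped and calibrated, and the same intensity frames can be fused with other channels in a multi-modal strategy.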
If one takes into account the great diversity of quantities that can be measured with holographic principles (see Fig. 1) and the ubiquitous challenge of finding the method best suited to solve the concrete problem, it becomes clear that the user needs assistance to make the best possible choice38. Understanding the features of a measurement system, their origin and their influence on the quality of the results is the first step on the way to an objective judgement of the data and finally to a system with outstanding performance. Apparently, there is no universally valid and simple recipe for this, because the multiplicity of measurement tasks and objects does not allow a generalized rule. Nevertheless, there are meanwhile some serious attempts to tackle this challenge. Here we would particularly like to highlight the initiative “The Fair Data Sheet”39 and the German research project OptAssyst40. The Fair Data Sheet aims to contribute to more uniformity in the definition and use of features for characterizing the performance of optical measurement devices for surface metrology, to make data sheets easier to understand, and to allow the user an objective comparison of different devices and measurement technologies. That is a good basis. However, a number of new questions arise, which concern the selection and description of the characteristic features and the design of future systems.
Alongside the “Fair Data Sheet”39, some further valuable seminal work should be mentioned: Häusler and Ettl have published an article entitled “About the clever selection and application of optical sensors”41. The authors list valuable indicators that should be considered for the selection of a particular sensor. Among them are factors that influence the measurement uncertainty, measures to estimate the information content delivered by the specific sensor, and requirements for the surface quality of the measurement object. Peter de Groot et al. propose to apply the so-called Instrument Transfer Function (ITF)42,43 as an object-independent measure to determine the limitations of optical metrology instruments and extend this concept in44 to the 3D ITF. They clearly point out that the optical resolution of a surface profiler is in general object-dependent. However, with reference to the ITF it is possible to define regimes in which its cut-off frequency provides metrics of the capability of a given instrument. In38 we discussed several measures that can be helpful for estimating the performance of an optical measurement system and its components. Furthermore, a system can be qualified by its degree of modularity, flexibility, user-friendliness, robustness against environmental influences, multi-modality performance (the ability to measure data of various modalities, such as dimensional and kinematic data), consumption of resources, ease of integration, and miniaturization. Established parameters for the assessment of the illumination source are its radiometric and coherence properties (spectral bandwidth, spectral power, coherence length, source dimension, noise level, …).
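To indicate how an object-independent characterization such as the ITF can be approached in its simplest linear regime, the sketch below estimates a one-dimensional transfer function as the ratio of measured to true spectral amplitudes of a surface profile. The instrument is mimicked here by a hypothetical Gaussian low-pass response and does not represent any real profiler.

```python
import numpy as np

def estimate_itf(z_true, z_measured, dx):
    """One-dimensional ITF estimate: ratio of measured to true spectral amplitudes."""
    spec_true = np.fft.rfft(z_true)
    spec_meas = np.fft.rfft(z_measured)
    freqs = np.fft.rfftfreq(len(z_true), d=dx)
    itf = np.abs(spec_meas) / np.maximum(np.abs(spec_true), 1e-12)
    return freqs, itf

# Hypothetical test: a multi-frequency surface profile passed through a
# simulated instrument that attenuates high spatial frequencies.
n, dx = 1024, 1e-3                                    # samples, sampling step in mm
x = np.arange(n) * dx
z_true = sum(0.1 * np.sin(2 * np.pi * f * x) for f in (5, 20, 80, 200))  # freqs in 1/mm
spec = np.fft.rfft(z_true)
freqs = np.fft.rfftfreq(n, d=dx)
response = np.exp(-(freqs / 150.0) ** 2)              # assumed instrument response
z_measured = np.fft.irfft(spec * response, n=n)

f_axis, itf = estimate_itf(z_true, z_measured, dx)
f_cutoff = f_axis[np.argmax(itf < 0.5)] if np.any(itf < 0.5) else None
print("estimated 50 % cut-off frequency [1/mm]:", f_cutoff)
```

Such a curve, measured with a suitable calibration artefact, allows different instruments to be compared by their cut-off frequencies independently of a particular object, in the spirit of the regime-based metrics discussed above.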