Understanding Measurement Performance and Specifications

Jarkko Ruonala
Product Manager
Nov 13th 2017
Industrial Measurements

The quality of a measurement device is often evaluated by a simple question: How accurate is the measurement? While the question seems simple enough, the answer often is not. Choosing the most suitable measurement instrument requires an understanding of the factors that contribute to the uncertainty of a measurement. This in turn provides an understanding of what is stated in the specifications – and what is not.

The performance of a measurement is defined by its dynamics (measurement range, response time), accuracy (repeatability, precision, and sensitivity), and stability (tolerance for aging and harsh environments). Of these, accuracy is often considered to be the most important quality; it is also one of the most difficult to specify.

Read the complete article below or watch this webinar on Understanding Your Specifications.


Sensitivity and Accuracy

The relationship between the change in measurement output and the change in reference value is called sensitivity. Ideally this relationship is perfectly linear, but in practice all measurements involve some imperfections or uncertainty.

Figure 1: Repeatability

The agreement of the measured value with the reference value is often simply called “accuracy”, but this is a somewhat vague term. Specified accuracy usually includes repeatability, which is the capability of the instrument to provide a similar result when the measurement is repeated under constant conditions (Figure 1). However, it may or may not include hysteresis, temperature dependency, non-linearity, and long-term stability. Repeatability alone is often a minor source of measurement uncertainty, and if the accuracy specification does not include the other uncertainties it may give a misleading impression of the actual performance of the measurement.
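To make the distinction concrete, here is a minimal sketch (not from the original article; all readings are hypothetical) of how repeatability is commonly quantified as the scatter of repeated readings under constant conditions, separately from any systematic bias:

```python
import statistics

# Hypothetical repeated readings (%RH) of a humidity probe held at a
# constant 50.0 %RH reference; values are illustrative only.
readings = [50.02, 49.98, 50.05, 49.97, 50.01, 50.03, 49.99, 50.00]

# Repeatability is commonly expressed as the sample standard deviation
# of readings taken under unchanged conditions.
repeatability = statistics.stdev(readings)

# The systematic offset from the reference is a separate quantity: the bias.
bias = statistics.mean(readings) - 50.0

print(f"repeatability: {repeatability:.3f} %RH")
print(f"bias:          {bias:+.3f} %RH")
```

A specification that quotes only the first number says nothing about the second – which is exactly why it matters what an “accuracy” figure includes.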

Figure 2: Transfer Function

The relationship between the measurement values and a known reference is often referred to as the transfer function (Figure 2). When a measurement is adjusted, this relationship is fine-tuned against a known calibration reference. Ideally, the transfer function is perfectly linear across the whole measurement range, but in practice most measurements involve some change in sensitivity depending on the magnitude of the measurand.

Figure 3: Non-linearity

This type of imperfection is referred to as non-linearity (Figure 3). The effect is often most pronounced at the extremes of the measurement range. It is therefore useful to check whether the accuracy specification includes non-linearity, and whether the accuracy is specified for the full measurement range. If it is not, there is reason to doubt the measurement accuracy near the extremes.
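As an illustration of one common convention (the article itself does not prescribe a method), non-linearity can be estimated as the largest deviation of the calibration points from a least-squares straight line; the data below are hypothetical:

```python
# Illustrative only: estimate non-linearity as the maximum deviation of
# measured values from a least-squares straight-line fit.
reference = [0.0, 25.0, 50.0, 75.0, 100.0]   # %RH applied (hypothetical)
measured  = [0.4, 25.1, 50.0, 74.9, 99.5]    # %RH indicated (hypothetical)

n = len(reference)
mean_x = sum(reference) / n
mean_y = sum(measured) / n

# Least-squares slope and intercept of the best-fit line.
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(reference, measured))
sxx = sum((x - mean_x) ** 2 for x in reference)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Non-linearity: the largest residual from the best-fit line, which in
# practice tends to occur near the ends of the range.
residuals = [y - (slope * x + intercept) for x, y in zip(reference, measured)]
non_linearity = max(abs(r) for r in residuals)
print(f"non-linearity: ±{non_linearity:.2f} %RH (worst point)")
```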

Figure 4: Hysteresis

Hysteresis is the change in measurement sensitivity that depends on the direction of the change in the measured variable (Figure 4). It may be a significant cause of measurement uncertainty with some humidity sensors, which are manufactured from materials that bond strongly to water molecules. If the specified accuracy does not indicate whether hysteresis is included, this source of measurement uncertainty is left unspecified. Moreover, if the calibration sequence is run in only one direction, the effect of hysteresis is not visible during calibration; if it is also omitted from the specification, the level of hysteresis in the measurement is impossible to know. Vaisala thin-film polymer sensors have negligible hysteresis, and this is always included in the specified accuracy.
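The point about calibration direction can be shown with a small sketch using hypothetical sweep data: hysteresis only becomes visible when the same reference points are approached from both directions.

```python
# Illustrative only: estimate hysteresis by comparing readings taken at
# the same reference points while the humidity is rising and falling.
reference = [20.0, 40.0, 60.0, 80.0]   # %RH applied (hypothetical)
rising    = [19.8, 39.7, 59.8, 79.9]   # readings on the ascending sweep
falling   = [20.3, 40.4, 60.3, 80.1]   # readings on the descending sweep

# Hysteresis at each point is the spread between the two sweep directions;
# a calibration sequence run in one direction only would never reveal it.
hysteresis = [abs(up - down) for up, down in zip(rising, falling)]
print(f"worst-case hysteresis: {max(hysteresis):.2f} %RH")
```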

Ambient conditions such as temperature and pressure also affect the accuracy of a measurement. If the temperature dependency is not specified and the operating temperature changes significantly, repeatability may be compromised. The specification may be given for the full operating temperature range, or only for a specific, limited, or “typical” operating range; specifications expressed in this way leave the other temperature ranges unspecified.

Stability and Selectivity

The sensitivity of a measurement device may change over time due to aging. In some cases this effect may be accelerated by interference from chemicals or other environmental factors. If the long-term stability is not specified, or if the manufacturer is unable to recommend a typical calibration interval, the specification actually indicates only the accuracy at the time of calibration. A slow change in sensitivity (sometimes referred to as drift or creep) is harmful because it may be difficult to observe and can cause latent problems in control systems.
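As a rough illustration (with hypothetical dates and offsets), drift is easiest to see as a trend in the offsets recorded at successive calibrations:

```python
from datetime import date

# Illustrative only: offsets found at successive calibrations of the
# same instrument. Dates and values are hypothetical.
calibrations = [
    (date(2015, 1, 15), +0.1),   # offset found at calibration, %RH
    (date(2016, 1, 20), +0.6),
    (date(2017, 1, 18), +1.1),
]

first_day = calibrations[0][0]
years   = [(d - first_day).days / 365.25 for d, _ in calibrations]
offsets = [o for _, o in calibrations]

# Crude drift rate: total change in offset divided by elapsed time.
# A trend like this is invisible at any single calibration.
drift_per_year = (offsets[-1] - offsets[0]) / (years[-1] - years[0])
print(f"estimated drift: {drift_per_year:+.2f} %RH per year")
```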

Selectivity is defined as the instrument’s insensitivity to changes in factors other than the actual measurand. For instance, a humidity measurement performed in an atmosphere containing certain chemicals may be influenced by those chemicals. This effect may be reversible or irreversible. The response to some chemicals may be exceedingly slow, and this cross-sensitivity can easily be confused with drift. An instrument with good selectivity is not affected by changes in any factor other than the actual measurand.

Calibration and Uncertainty

If measurement readings deviate from the reference, the sensitivity of the instrument can be corrected. This is referred to as adjustment. Adjustment performed at one point is referred to as offset correction; two-point adjustment is a linear correction for both offset and gain (sensitivity). If the measurement has to be adjusted at several points, this might indicate poor linearity in the measurement, which has to be compensated for with non-linear multi-point corrections. In addition, if the adjustment points are the same as the calibration points, the quality of the measurement between adjustment points remains unverified.
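The three kinds of correction can be sketched in a few lines of code. This is an illustrative outline, not any particular instrument’s adjustment routine, and all calibration values are hypothetical:

```python
# Illustrative only: the three kinds of correction described above,
# applied to a raw instrument reading.

def offset_correction(raw, offset):
    """One-point adjustment: shift every reading by a constant."""
    return raw + offset

def two_point_correction(raw, gain, offset):
    """Two-point adjustment: linear correction of both gain and offset."""
    return gain * raw + offset

def multi_point_correction(raw, cal_raw, cal_true):
    """Piecewise-linear correction through several calibration points;
    needing this can itself be a sign of poor inherent linearity."""
    # Linear interpolation between the bracketing calibration points.
    for (x0, y0), (x1, y1) in zip(zip(cal_raw, cal_true),
                                  zip(cal_raw[1:], cal_true[1:])):
        if x0 <= raw <= x1:
            return y0 + (y1 - y0) * (raw - x0) / (x1 - x0)
    raise ValueError("reading outside calibrated range")

print(offset_correction(49.5, 0.5))                       # -> 50.0
print(two_point_correction(49.5, 1.01, -0.2))             # -> 49.795
print(multi_point_correction(30.0, [0, 25, 50], [0.4, 25.1, 50.0]))
```

Note that the multi-point correction only constrains the readings at and between the calibration points used; as the text above points out, the measurement between adjustment points remains unverified if the same points are reused for calibration.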

Once the instrument has been adjusted, it is calibrated to verify its accuracy. Calibration, which is sometimes confused with adjustment, means comparing the measured value with a known reference, called a working standard. The working standard is the first element in the traceability chain, which means the series of calibrations and references up to the primary standard. Whereas a number of instruments calibrated against a certain reference may be accurate in relation to each other (high precision), the absolute accuracy with regard to the primary standard cannot be verified if the calibration uncertainty is not specified.

Traceability of calibration means that the chain of measurements, references, and related uncertainties up to the primary standard is known and professionally documented. This allows calculation of the uncertainty of the calibration reference and determination of the instrument’s accuracy. 

What Is “Accurate Enough”?

When choosing a measurement instrument, it is necessary to consider the level of accuracy required. For instance, in standard ventilation control applications where relative humidity is adjusted for human comfort, ±5 %RH might be acceptable. However, in an application such as cooling tower control, more accurate control and smaller margins are required to increase operating efficiency. 

When the measurement is used as a control signal, repeatability and long-term stability (precision) are important, but absolute accuracy against a traceable reference is less significant. This is especially the case in a dynamic process, where the variations in temperature and humidity are large and the stability of the measurement, rather than absolute accuracy, is crucial. 

On the other hand, if the measurement is used, for example, to verify that the testing conditions inside a laboratory are comparable with those of other laboratories, the absolute accuracy and traceability of calibration are of utmost importance. An example of such an accuracy requirement is found in the standard TAPPI/ANSI T402 – Standard conditioning and testing atmospheres for paper, board, pulp handsheets, and related products – which defines the testing condition in a paper testing laboratory as 23 ±1.0 °C and 50 ±2 %RH. If the specified accuracy of the measurement were, say, ±1.5 %RH but the calibration uncertainty were ±1.6 %RH, the total uncertainty with regard to the primary calibration standard would exceed the specification. The performed analyses – which are heavily dependent on the ambient humidity inside the testing facility – would then not be comparable, and it would be impossible to confirm that they were performed under standard conditions.
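Assuming the two contributions are independent and combined in root-sum-of-squares fashion, as in a GUM-style uncertainty analysis (the article does not state the combination method), the arithmetic looks like this:

```python
import math

# Illustrative only: combining independent uncertainty contributions in
# root-sum-of-squares fashion, using the figures quoted in the text above.
spec_accuracy   = 1.5   # instrument accuracy specification, ±%RH
calibration_unc = 1.6   # uncertainty of the calibration reference, ±%RH
requirement     = 2.0   # TAPPI/ANSI T402 tolerance, ±%RH

total = math.sqrt(spec_accuracy**2 + calibration_unc**2)
print(f"combined uncertainty: ±{total:.1f} %RH")   # ≈ ±2.2 %RH
print("meets requirement" if total <= requirement else "exceeds requirement")
```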

Accuracy specification alone, without information about the calibration reference uncertainty, leaves the absolute accuracy of the instrument undefined.

Vaisala takes pride in providing professional and comprehensive specifications that are based on international standards, scientific testing methods, and empirical data. For customers this means comprehensive and reliable information that supports them in making the correct product choices.

Figure 5: Comparison of accuracy information in specifications of three different brands of high-accuracy humidity transmitter
Questions to Ask when Choosing an Instrument

  • Does the specified accuracy include all possible uncertainties: repeatability, non-linearity, hysteresis, and long-term stability?
  • Does the specified accuracy cover the full measurement range, or is the range for accuracy specification limited? Is the temperature dependency given in the specification, or is the temperature range defined in the accuracy specification?
  • Is the manufacturer able to provide a proper calibration certificate? Does the certificate include information on the calibration method, the references used, and professionally calculated reference uncertainty? Does the certificate include more than one or two calibration points, and is the whole measurement range covered?
  • Is the recommendation for the calibration interval given, or is the long-term stability included in the accuracy specification? What level of selectivity is required in the intended operating environment? Is the manufacturer able to provide information on, or references for, the instrument’s suitability for the intended environment and application?


Contributor:

Jarkko Ruonala

Product Manager

Jarkko Ruonala is a Product Manager for Vaisala Industrial Measurements. He has a background in automation, instrumentation and process analyzers. He has a Master of Science degree in Industrial Engineering and Management from the University of Oulu, Finland.

Connect with Jarkko on LinkedIn

 
