What is calibration

Calibration is widely used in a number of industries – but what exactly is it, and how does it actually work? In basic terms, calibration is the process of comparing measurements. Measurements are taken on two devices: the first is of known magnitude or correctness, and the second is a comparable measurement of the same quantity taken on another device. The device of known magnitude or correctness is referred to as the ‘standard’. The second device is commonly known as the test unit, unit under test, test instrument, or any other name given to the device being calibrated.
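As a rough illustration of that comparison, the sketch below computes the error of a unit under test against a reading from the standard. The readings, units and function name are hypothetical and chosen purely for illustration.

    # Minimal sketch of a single calibration comparison (illustrative values only).

    def calibration_error(standard_reading: float, uut_reading: float) -> float:
        """Error of the unit under test (UUT) relative to the standard."""
        return uut_reading - standard_reading

    # Hypothetical readings: the standard supplies the reference value of the
    # quantity, the unit under test reports what it measures for the same quantity.
    standard_reading = 10.000   # e.g. volts, from the reference standard
    uut_reading = 10.012        # same quantity, measured by the device being calibrated

    error = calibration_error(standard_reading, uut_reading)
    print(f"UUT error: {error:+.3f}")   # +0.012 -> the UUT reads slightly high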

The design process

The process of calibration commences with the design of the measuring instrument that will require calibration. To work, the design must be able to "hold a calibration" through its calibration interval. Or, to put it another way, the device must be capable of making measurements that remain "within engineering tolerance" when it is used over a period of time in the specified environmental conditions. If a design meets these requirements, there is a higher chance that the measuring instrument will perform as anticipated.
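One way to picture "holding a calibration" is to assume a simple, constant drift rate and check whether the accumulated drift stays within the engineering tolerance over the interval. The linear drift model and the figures below are assumptions for illustration only; real instrument behaviour is rarely this simple.

    # Sketch: will a device with an assumed, constant drift rate stay within
    # tolerance for its whole calibration interval? (Illustrative model only.)

    def holds_calibration(drift_per_month: float, tolerance: float, interval_months: int) -> bool:
        """True if the worst-case accumulated drift never exceeds the tolerance."""
        worst_case_drift = abs(drift_per_month) * interval_months
        return worst_case_drift <= tolerance

    # Hypothetical figures: 0.002 units of drift per month, +/-0.015 units tolerance.
    print(holds_calibration(drift_per_month=0.002, tolerance=0.015, interval_months=6))   # True
    print(holds_calibration(drift_per_month=0.002, tolerance=0.015, interval_months=12))  # False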

Of course, the exact process and mechanism for attributing tolerance values varies substantially, depending on many factors, including the industry sector and the country. The manufacturer of the measuring equipment will generally assign it a measurement tolerance, suggest a calibration interval and specify the environmental operating and storage ranges. The user of the device will then usually assign the actual calibration interval, based on the likely level of usage of that particular unit of measuring equipment.
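The kind of information a manufacturer supplies, and the interval the user then assigns, could be captured in a simple record such as the one sketched below. The field names and values are assumptions made for illustration, not any standard format.

    from dataclasses import dataclass

    @dataclass
    class CalibrationSpec:
        """Illustrative record of manufacturer-supplied calibration data."""
        tolerance: float                  # measurement tolerance assigned by the manufacturer
        recommended_interval_months: int  # manufacturer-suggested calibration interval
        operating_temp_c: tuple           # specified environmental (temperature) range
        storage_temp_c: tuple             # specified storage range

    # Hypothetical spec for a bench instrument; the user then assigns the actual
    # interval based on expected usage (here, accepting the manufacturer's suggestion).
    spec = CalibrationSpec(
        tolerance=0.015,
        recommended_interval_months=12,
        operating_temp_c=(0, 40),
        storage_temp_c=(-20, 60),
    )
    assigned_interval_months = spec.recommended_interval_months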

As an example, a common interval used throughout industries in the US is six months, based on 8–12 hours of use, 5 days per week. If the same instrument were used 24/7, it would generally be allocated a shorter interval. Calibration intervals can also be assigned as the result of other processes, such as the analysis of results from previous calibrations.
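A crude way to express the idea that heavier usage shortens the interval is to scale a baseline interval by relative usage hours. This proportional rule is only an illustration, not a standard formula, and the figures simply mirror the example above.

    # Sketch: scale a baseline calibration interval by relative usage hours.
    # The proportional rule and the numbers are illustrative assumptions only.

    def scaled_interval_months(baseline_months: float,
                               baseline_hours_per_week: float,
                               actual_hours_per_week: float) -> float:
        """Shorten (or lengthen) the interval in proportion to usage."""
        return baseline_months * (baseline_hours_per_week / actual_hours_per_week)

    # Baseline from the example: six months at roughly 10 h/day, 5 days/week (~50 h/week).
    print(scaled_interval_months(6, 50, 50))    # 6.0 months at the baseline duty cycle
    print(scaled_interval_months(6, 50, 168))   # ~1.8 months if the instrument runs 24/7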

Defining calibration

Following the design phase, the next step is to define the calibration process. Certainly, the identification of standards is the most visible and obvious part of the entire process. For optimal performance, the standard should have less than ¼ of the measurement uncertainty of the device being calibrated. If this rule is adhered to, the combined uncertainty contributed by all of the standards involved is deemed insignificant, because the final measurement also adheres to the 4:1 ratio.
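The 4:1 check itself is simple to express: compare the uncertainty (or tolerance) of the device being calibrated with the uncertainty of the standard. The figures in the sketch below are hypothetical.

    # Sketch of the 4:1 rule: the standard's uncertainty should be no more than
    # a quarter of the uncertainty (or tolerance) of the device being calibrated.

    def meets_four_to_one(uut_uncertainty: float, standard_uncertainty: float) -> bool:
        """True if the accuracy ratio is at least 4:1 (hypothetical figures below)."""
        return uut_uncertainty / standard_uncertainty >= 4.0

    print(meets_four_to_one(uut_uncertainty=0.020, standard_uncertainty=0.004))  # True  (5:1)
    print(meets_four_to_one(uut_uncertainty=0.020, standard_uncertainty=0.008))  # False (2.5:1)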

The ¼ rule, or 4:1 ratio as it is also known, was perhaps first formally recognised in Handbook 52, the accompaniment to MIL-STD-45662A, an early US Department of Defense metrology program specification. Prior to this, a 10:1 ratio was widely used from the 1950s until the 1970s; however, as technology advanced rapidly, that ratio became impossible to achieve for the majority of electronic measurements.

The actual methods of calibration have changed very little in recent years. The steps above still form the core of many calibration procedures – from the testing of industrial equipment to the calibration of car components.