Thoughts on Calibration - by Martin Rosell

When we come to consider load cells and similar items, one of the first points of consideration, after the rating of the unit, is the accuracy of the readings. On first impression this would seem a fairly simple question to answer. Or so you might think. Even the most cursory glance at manufacturers' literature reveals a confusing mix of methods, criteria and standards.

Taking a cross-section of nine UK-based manufacturers of load cells, we see accuracy claims ranging from ±0.03% RO to ±1% of Actual Reading, taking in such varied statistics as ±0.04% of Full Range, ±0.1% of Rated Load, ±1.0% of FSD and ±0.04% of Maximum Range.

It’s almost as if there were no standard way of measuring accuracy, and each manufacturer chooses whichever way of portraying their product makes it seem the best performing. In some cases the same manufacturer will choose different criteria for different units, which can only add to the poor customer’s confusion. So what is the confused buyer supposed to do? Well, much like an eager teenager trying to decide between a Vauxhall Corsa and a Peugeot 106, a little detective work reveals that the differences are subtle but can have a significant influence on the result.

BS EN ISO 7500-1:2015 is the relevant standard. Part 1 covers tension and compression testing machines and the calibration and verification of the force-measuring system, and it replaced the earlier BS 1610 (a designation which now refers to the construction and testing of sewers). Within this publication we find the basis on which testing and calibration are undertaken. However, as it is written for unambiguity rather than to be a gripping roller-coaster of a read, it is worth skipping straight to the highlights.

Under section 6.1 it states that:  

This calibration shall be carried out for each of the force ranges used and with all force indicators in use. Any accessory devices (e.g. pointer, recorder) that may affect the force-measuring system shall, where used, be verified in accordance with 6.4.6.

If the testing machine has several force-measuring systems, each system shall be regarded as a separate testing machine. The same procedure shall be followed for double-piston hydraulic machines.

The calibration shall be carried out using force-proving instruments with the following exception: if the force to be verified is below the lower limit of the smallest capacity force-proving device used in the calibration procedure, use known masses.

When more than one force-proving instrument is required to calibrate a force range, the maximum force applied to the smaller device shall be the same as the minimum force applied to the next force-proving instrument of higher capacity. When a set of known masses is used to verify forces, the set shall be considered as a single force-proving instrument.

The force-proving instrument shall comply with the requirements specified in ISO 376. The class of the instrument shall be equal to or better than the class for which the testing machine is to be calibrated. In the case of dead weights, the relative error of the force generated by these weights shall be within ± 0,1 %.
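To make that range-stitching rule concrete, the short sketch below (a minimal illustration in Python; the function name and the capacities are hypothetical, not drawn from the standard) checks that the spans covered by successive force-proving instruments meet exactly, with neither gap nor overlap:

    # Illustrative check of the 6.1 stitching rule: the maximum force
    # applied to the smaller proving instrument must equal the minimum
    # force applied to the next instrument of higher capacity.
    def check_stitching(spans):
        """spans: list of (min_kN, max_kN) per proving instrument,
        ordered by increasing capacity."""
        for (lo_a, hi_a), (lo_b, hi_b) in zip(spans, spans[1:]):
            if hi_a != lo_b:
                raise ValueError(
                    f"Gap or overlap at {hi_a} kN / {lo_b} kN between "
                    "adjacent proving instruments")
        return True

    # Example: a 0-500 kN range covered by 50 kN and 500 kN devices.
    check_stitching([(5, 50), (50, 500)])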

Section 6.4.6, mentioned in the first paragraph of 6.1, covers the calibration and verification of mechanical accessory devices such as pointers and recorders, and states that:

In both cases the relative indication error, q, shall be calculated for the three normal series of measurements, and the relative repeatability error, b, shall be calculated from the four series. The values obtained for b and q shall conform to those listed in Table 2 for the class under consideration.
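Loosely, q measures how far the indicated force sits from the true force applied, and b measures the scatter between repeated series. The sketch below gives the gist in Python; it is a simplified reading of those definitions (the standard applies them per series against the Table 2 limits, so consult the text for the authoritative formulae):

    # q: relative indication error, the offset of the mean indicated
    # force from the true force (from the proving instrument), in %.
    def relative_indication_error(indicated, true_force):
        mean_i = sum(indicated) / len(indicated)
        return 100.0 * (mean_i - true_force) / true_force

    # b: relative repeatability error, the spread of the indicated
    # forces across the repeated series, in %.
    def relative_repeatability_error(indicated):
        mean_i = sum(indicated) / len(indicated)
        return 100.0 * (max(indicated) - min(indicated)) / mean_i

    # Example: four series of readings at a true force of 100 kN.
    readings = [100.2, 99.9, 100.1, 100.3]
    print(relative_indication_error(readings, 100.0))  # ~0.13 %
    print(relative_repeatability_error(readings))      # ~0.40 %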

What this effectively means is that any accuracy quoted for the equipment being supplied cannot, by definition, have a lower margin of error than the equipment used to calibrate the load cell; if, for example, dead weights are used, then any quoted accuracy must carry a margin of error greater than 0.1%. It is worth noting at this stage that dead-weight machines offer an obtainable uncertainty of 0.001%, whereas strain-gauged hydraulic machines, the most commonly used calibration devices, offer a best attainable uncertainty of 0.05%.
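The consequence is easy to sanity-check. The figures below are the uncertainties quoted above; the function itself is hypothetical, simply a way of stating the rule that a claim can never be tighter than the calibration equipment behind it:

    # Best obtainable uncertainty of common calibration methods, in %,
    # as quoted in the text above (illustrative values, not a lookup
    # table from any standard).
    CALIBRATION_UNCERTAINTY_PCT = {
        "dead_weight_machine": 0.001,
        "strain_gauged_hydraulic": 0.05,
    }

    def claim_is_plausible(claimed_pct, method):
        # An accuracy claim tighter than the calibration method's own
        # uncertainty cannot be supported by that calibration.
        return claimed_pct >= CALIBRATION_UNCERTAINTY_PCT[method]

    print(claim_is_plausible(0.02, "strain_gauged_hydraulic"))  # False
    print(claim_is_plausible(0.02, "dead_weight_machine"))      # True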

How, then, can manufacturers claim accuracy rates of 0.02%? The answer lies in the presentation.
In some cases, rather than quote a usable and relevant figure, such as a percentage of the actual reading, which gives a linear and relevant margin of error at any point on the scale, many suppliers choose instead to quote a percentage of the full scale (described variously as Full Scale, FS, FRO, FSD and Max Range). The National Physical Laboratory explains the difference between these two measurement criteria on its website at http://www.npl.co.uk/reference/faqs/what-is-the-difference-between-'-reading'-and-'-full-scale-reading'-(faq-force)

The table below shows the uncertainties in the measurement of force, first given as 1 % of reading and second expressed as 1 % of full-scale reading, to illustrate the difference:

[Figure: "different meanings of 1% uncertainty", an NPL table comparing a force-measurement uncertainty expressed as 1 % of reading with the same uncertainty expressed as 1 % of full-scale reading]
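Since the table survives here only as an image, the sketch below makes the same comparison numerically for a hypothetical 100 kN full-scale cell (the capacity and forces are illustrative, not NPL's own figures): a 1 % of reading uncertainty scales with the applied force, while a 1 % of full-scale uncertainty is a fixed ±1 kN whatever the reading.

    FULL_SCALE_KN = 100.0  # hypothetical cell capacity

    def u_of_reading(force_kn, pct=1.0):
        return force_kn * pct / 100.0       # scales with the reading

    def u_of_full_scale(force_kn, pct=1.0):
        return FULL_SCALE_KN * pct / 100.0  # fixed, whatever the reading

    for force in (100.0, 10.0, 1.0):
        u_rd, u_fs = u_of_reading(force), u_of_full_scale(force)
        print(f"{force:6.1f} kN: ±{u_rd:5.2f} kN (of reading), "
              f"±{u_fs:.1f} kN (of full scale, {100 * u_fs / force:.0f} % "
              "of the applied force)")

At an applied force of 1 kN the full-scale figure is still ±1 kN: a 100 % error band.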

As can be seen above, quoting a percentage of full scale can mean the accuracy at the lower end of the scale becomes so vague as to render any measurement meaningless, whereas percentage-of-reading figures maintain a reliable linearity of accuracy.

We can safely infer from the above that accuracy figures are used not as a means of identifying the most reliable and accurate device for measuring a particular rated load, but rather as a marketing ploy: a modern sleight of hand to fool the customer into believing a piece of equipment is “better” than its competition, because the numbers can’t lie. Except, as we have seen, they can and do; and until an industry-wide standard method of measurement is agreed and adopted, they will continue to offer obfuscation rather than clarification.