Webinars and Online Resources

Uncertainty, ISO/IEC 17025:2017 and decisions made on conformity

How measurement uncertainty is accounted for when stating conformity with a specified requirement.

When ISO/IEC 17025 was republished in 2017, the new version included a requirement that laboratories take account of uncertainty of measurement when reporting results against a specification or requirement.

Clause 3.7 of ISO/IEC 17025:2017 defines a ‘decision rule’ as a rule that describes how measurement uncertainty is accounted for when stating conformity with a specified requirement. The decision rule applied must be identified in the statement of conformity of the test report unless it is inherent in the requested specification or standard.

This article provides information on how SATRA applies the decision rule to its test reports.

What is uncertainty?

The science of measurement is known as ‘metrology’. When anything is measured, the act of measurement itself introduces some doubt about the result. For example, the resolution of the equipment sets a limit on accuracy: with a resolution of ‘0.1’ it is impossible to measure a quantity that is actually ‘0.665’ – the instrument can only indicate ‘0.6’ or ‘0.7’. There is, therefore, an uncertainty of measurement. Even the act of handling a steel rule warms it and causes it to expand slightly in length, leading to small fluctuations – another source of ‘uncertainty’.

There are many examples where small variables and constants introduce variation into repeated measurements. The consequence is that when repeated measurements are made, the results obtained form a group around the actual real value. This group can be considered a ‘statistical population’. Most results will be close to the real value, but some will drift away. The group of repeated results will most often – but not always – follow a statistical curve (a distribution). From this distribution, a ‘confidence interval’ can be derived: a range expressing the probability that the reported result is close to the actual real value. The statement ‘k=2’ simply means that the possible statistical variation of the result (the uncertainty) has been taken as two standard deviations either side of the mean value. Hence, for a result reported with k=2, there is approximately 95 per cent confidence that the real measured quantity lies within the given uncertainty magnitude of the result. When comparing results that are close to requirements, competence is required in the application of the calculated confidence of the measured result.
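The relationship between repeated readings, the standard uncertainty of the mean and the k=2 expanded uncertainty can be sketched in a few lines of Python. This is a simplified Type A evaluation for illustration only – the readings are invented, and a full uncertainty budget would combine many more contributions than repeatability alone:

```python
import statistics

def expanded_uncertainty(readings, k=2.0):
    """Simplified Type A evaluation of repeated readings.

    The standard uncertainty of the mean is the sample standard
    deviation divided by sqrt(n); multiplying by the coverage
    factor k gives the expanded uncertainty (k=2 corresponds to
    roughly 95% coverage for a normal distribution).
    """
    n = len(readings)
    mean = statistics.fmean(readings)
    std_unc = statistics.stdev(readings) / n ** 0.5
    return mean, k * std_unc

# Ten repeated readings of a nominally 0.665 quantity (invented values)
readings = [0.664, 0.666, 0.665, 0.663, 0.667,
            0.665, 0.664, 0.666, 0.665, 0.666]
mean, U = expanded_uncertainty(readings)
print(f"result = {mean:.4f} +/- {U:.4f} (k=2)")
```

The reported result would then be quoted as the mean plus or minus the expanded uncertainty, with the coverage factor stated.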

Applying uncertainty when reporting results

SATRA's guidelines provide recommendations that are based upon SATRA's knowledge and experience. The guidelines are intended to indicate conformance by providing information on the likely performance or characteristics of a property. As such, uncertainty of measurement is not applied when evaluating results against guideline recommendations.

Qualitative results – Where the result is a simple ‘yes’ or ‘no’ to the presence of something, uncertainty is not applied.

Visual or subjective assessments – In cases where the result is a pass or fail against a visual or subjective assessment, uncertainty cannot be applied to the final result. Where possible, SATRA will apply uncertainty to the test itself by assessing the test specimen after completion of both the standard test and a continuation of the test where the standard test parameter has been adjusted by a quantity equal to the calculated uncertainty. Results would be reported accordingly. The standard test parameter adjusted could, for instance, be load, time or number of cycles.

Numerical results – When reporting numerical results against a conformance statement, the effect of uncertainty of measurement on the result obtained is considered both positively and negatively before a pass or fail, or the allocation of a class or performance level, is reported.

In most situations, the uncertainty of measurement is irrelevant to the interpretation of conformity, provided that the result obtained is not too close to the requirement. However, when the result lies close to a requirement limit, we are obliged to indicate that the uncertainty of the result may affect the interpretation of conformity.

When applying uncertainty, we use the statistical basis of k=2. Multiplying the standard uncertainty by this coverage factor gives the ‘expanded uncertainty’, which corresponds to a coverage probability of approximately 95 per cent.

Where the result falls outside the ‘guard banding’ (the requirement plus or minus the uncertainty), the risk of the result being a false accept or false reject is minor, and in this case a pass or fail, class or level will be reported.

Where the result falls inside the guard banding, there is an increased risk that the result is a false accept or a false reject. In this instance, SATRA will not provide a pass or fail statement, or a class or level, but will include information in the notes in relation to the result obtained.

An example of this is if a requirement is stated as being a minimum of 100 units and the uncertainty of the test result is estimated to be ±5 units. In this situation, the ‘guard band’ (the requirement plus or minus the value of uncertainty associated with the test) is deemed as being 95 to 105 units. Any result falling within this range is considered as being ‘uncertain’, and statements such as ‘pass’ or ‘fail’ cannot be made with confidence (see figure 1).

Figure 1: An example of how uncertainty of measurement affects conformity decisions

With performance levels or classes, uncertainty is applied to the requirement of each level or class, with each having its own guard band.
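Applying a guard band to each level can be sketched as follows. The level names and thresholds are invented, and the simplification that a result inside a level's guard band is reported at the next level down is an assumption for illustration – in practice, explanatory notes on the uncertain level would accompany the result:

```python
def allocate_level(result, level_minima, uncertainty):
    """Allocate the highest level whose requirement the result
    meets once its guard band is applied (illustrative sketch).

    level_minima: dict mapping level name -> minimum requirement,
    checked from the most demanding level downwards.
    """
    for level, minimum in sorted(level_minima.items(),
                                 key=lambda kv: kv[1], reverse=True):
        if result >= minimum + uncertainty:
            return level
    return None  # no level can be stated with confidence

# Hypothetical performance levels with +/-5 units of uncertainty
levels = {"level 1": 100, "level 2": 200, "level 3": 300}
print(allocate_level(310, levels, 5))  # clears level 3's guard band
print(allocate_level(303, levels, 5))  # inside level 3's band
```

A result of 310 clears the level 3 guard band (300 + 5), whereas 303 lies inside it, so only level 2 can be stated with confidence.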

How can we help?


Please contact SATRA for assistance with decision rules and ISO/IEC 17025:2017.