Uncertainty Is Certain!

Uncertainty exists in any measured quantity because measurements are always performed by a person or instrument. For example, if you are using a ruler to measure length, it is necessary to interpolate between gradations given on the ruler. This gives the uncertain digit in the measured length. While there may not be much deviation, what you estimate to be the last digit may not be the same as someone else's estimation. We need to account for this uncertainty when we report measured values.

When measurements are repeated, we can gauge their accuracy and precision. Accuracy tells us how close a measurement is to a known value. Precision tells us how close repeated measurements are to each other. Imagine accuracy as hitting the bullseye on a dartboard every time, while precision corresponds to hitting the "triple 20" consistently. As another example, consider an analytical balance with a calibration error that makes it read 0.24 grams too high. Repeated weighings of a single sample might give identical readings, meaning excellent precision, but the accuracy of every measurement would be poor.

Read this text, which describes how uncertainty comes about in measurements. It uses the example of a dartboard to differentiate between accuracy and precision.

In science, there are numbers and there are numbers. What we ordinarily think of as a number and will refer to here as a pure number is just that: an expression of a precise value. The first of these you ever learned were the counting numbers, or integers; later on, you were introduced to the decimal numbers, the rational numbers such as 1/3, and the irrational numbers such as π (pi), the latter two kinds being numbers that cannot be expressed as exact decimal values.

The other kind of numeric quantity that we encounter in the natural sciences is a measured value of something – the length or weight of an object, the volume of a fluid, or perhaps the reading on an instrument. Although we express these values numerically, it would be a mistake to regard them as the kind of pure numbers described above.

[Image: a value scale with a pointer indicator]

Confusing? Suppose our instrument has an indicator such as you see here. The pointer moves up and down so as to display the measured value on this scale. What number would you write in your notebook when recording this measurement?

Clearly, the value is somewhere between 130 and 140 on the scale, but the graduations enable us to be more exact and place the value between 134 and 135. The indicator points more closely to the latter value, and we can go one more step by estimating the value as perhaps 134.8, so this is the value you would report for this measurement.

Now here is the important thing to understand: although “134.8” is itself a number, the quantity we are measuring is almost certainly not 134.8 – at least, not exactly. The reason is obvious if you note that the instrument scale is such that we are barely able to distinguish between 134.7, 134.8, and 134.9. In reporting the value 134.8 we are effectively saying that the value is probably somewhere within the range 134.75 to 134.85. In other words, there is an uncertainty of ±0.05 unit in our measurement.
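To make this arithmetic explicit, here is a minimal Python sketch; the function name uncertainty_range is purely illustrative, not from any standard library. It converts a reported reading and the size of its smallest estimated digit into the implied range:

    def uncertainty_range(reported, step=0.1):
        """Interval implied by a reading estimated to the nearest step."""
        half = step / 2
        return reported - half, reported + half

    low, high = uncertainty_range(134.8, step=0.1)
    print(f"134.8 means somewhere between {low:.2f} and {high:.2f}")
    print(f"that is, 134.8 +/- {0.1 / 2:.2f}")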

All measurements of quantities that can assume a continuous range of values (lengths, masses, volumes, etc.) consist of two parts: the reported value itself (never an exactly known number), and the uncertainty associated with the measurement.

This uncertainty arises from error in the measurement process. By error, we do not mean just outright mistakes, such as incorrect use of an instrument or failure to read a scale properly; although such gross errors do sometimes happen, they usually yield results that are sufficiently unexpected to call attention to themselves.


Error in Reading Scales

When you measure a volume or weight, you observe a reading on a scale of some kind, such as the one illustrated above. Scales, by their very nature, are limited to fixed increments of value, indicated by the division marks. The actual quantities we are measuring, in contrast, can vary continuously, so there is an inherent limitation in how finely we can discriminate between two values that fall between the marked divisions of the measuring scale.

[Image: a digital multimeter]

Scale-reading error is often classified as random error (see below), but it occurs so commonly that we treat it separately here.

The same problem remains if we substitute an instrument with a digital display; there will always be a point at which some value that lies between the two smallest divisions must arbitrarily toggle between two numbers on the readout display. This introduces an element of randomness into the value we observe, even if the true value remains unchanged.
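A short, purely illustrative simulation makes this toggling visible: a true value sitting exactly halfway between two display increments, nudged by a little random noise, flips between the two readouts. The noise level and resolution chosen here are assumptions, not values from the text:

    import random

    def digital_readout(true_value, resolution=0.1, noise_sd=0.02):
        """Round a slightly noisy signal to the display's resolution."""
        noisy = true_value + random.gauss(0, noise_sd)
        return round(noisy / resolution) * resolution

    random.seed(1)
    # 134.85 lies exactly between the 134.8 and 134.9 readouts,
    # so tiny disturbances make the display toggle between them.
    readings = [digital_readout(134.85) for _ in range(8)]
    print([f"{r:.1f}" for r in readings])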

[Image: a value scale]

The more sensitive the measuring instrument, the less likely it is that two successive measurements of the same sample will yield identical results. In the example we discussed above, distinguishing between the values 134.8 and 134.9 may be too difficult to do in a consistent way, so two independent observers may record different values even when viewing the same reading.


Parallax Error

One form of scale-reading error that often afflicts beginners in the science laboratory is failure to properly align the eye with the part of the scale you are reading. This gives rise to parallax error. Parallax refers to the change in the apparent position of an object when viewed from different points.

[Image: meniscus error]

[Image: a beaker demonstrating parallax error]

The most notorious example encountered in the introductory chemistry laboratory is failure to read the volume of a liquid properly in a graduated cylinder or burette. Training every student to keep their eye level with the bottom of the meniscus is the hope and despair of lab instructors.

[Image: parallax - actual value vs. apparent value]
[Image: a ruler measuring a block]


Proper use of a measuring device can help reduce the possibility of parallax error. For example, a length scale should be in direct contact with the object (right), not above it as on the left.

Analog meters (those having pointer needles) are most accurate when read at about 2/3 of the length of the scale.

[Image: an absorbance scale]


Analog-type meters, unlike those having digital readouts, are also subject to parallax error. Those intended for high-accuracy applications often have a mirrored arc along the scale in which a reflection of the pointer needle can be seen if the viewer is not properly aligned with the instrument.


Random (Indeterminate) Error

[Image: random error]


Each measurement is also influenced by a myriad of minor events, such as building vibrations, electrical fluctuations, motions of the air, and friction in any moving parts of the instrument. These tiny influences constitute a kind of noise that also has a random character. Whether we are conscious of it or not, all measured values contain an element of random error.
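The sketch below, which assumes (for illustration only) that the random disturbances are Gaussian, simulates ten repeated weighings of the same sample and summarizes the scatter; the sample mass and noise level are invented for the example:

    import random
    import statistics

    random.seed(42)
    TRUE_MASS = 2.500   # grams; unknown in a real experiment

    # Each reading is the true value plus a small random disturbance
    # (vibrations, air currents, electrical noise, friction, ...).
    readings = [TRUE_MASS + random.gauss(0, 0.004) for _ in range(10)]

    print(f"mean    : {statistics.mean(readings):.4f} g")
    print(f"std dev : {statistics.stdev(readings):.4f} g  (size of the scatter)")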


Systematic Error

Suppose that you weigh yourself on a bathroom scale, not noticing that the dial reads “1.5 kg” even before you have placed your weight on it. Similarly, you might use an old ruler with a worn-down end to measure the length of a piece of wood. In both of these examples, all subsequent measurements, either of the same object or of different ones, will be off by a constant amount.

[Image: systematic error]


Unlike random error, which is impossible to eliminate, systematic error (also known as determinate error) is usually quite easy to avoid or compensate for, but only by a conscious effort in the conduct of the observation, usually by proper zeroing and calibration of the measuring instrument. However, once systematic error has found its way into the data, it can be very hard to detect.
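The bathroom-scale example above can be sketched in a few lines of Python. The offset value and names here are illustrative assumptions, but the logic is exactly the zeroing correction just described:

    ZERO_OFFSET = 1.5   # kg; what the dial reads with nothing on the scale

    def displayed_mass(true_mass):
        """Reading shown by the mis-zeroed bathroom scale."""
        return true_mass + ZERO_OFFSET

    raw = [displayed_mass(m) for m in (60.0, 72.5, 85.0)]

    # Once the offset is known (by reading the empty scale),
    # subtracting it corrects every subsequent measurement.
    corrected = [r - ZERO_OFFSET for r in raw]
    print(raw)        # [61.5, 74.0, 86.5]
    print(corrected)  # [60.0, 72.5, 85.0]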


The Difference between Accuracy and Precision

We tend to use these two terms interchangeably in our ordinary conversation, but in the context of scientific measurement, they have very different meanings:

Accuracy refers to how closely the measured value of a quantity corresponds to its true value.

Precision expresses the degree of reproducibility, or agreement between repeated measurements.

Accuracy, of course, is the goal we strive for in scientific measurements. Unfortunately, however, there is no obvious way of knowing how closely we have achieved it; the true value, whether it be of a well-defined quantity such as the mass of a particular object, or an average that pertains to a collection of objects, can never be known – and thus we would never recognize it even if we were fortunate enough to find it.
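Even so, the two ideas can be expressed numerically. The sketch below assumes, purely for illustration, that a reference ("true") value is known (something we never really have in practice), and summarizes accuracy as the bias of the mean and precision as the spread of the readings; the data are invented:

    import statistics

    def accuracy_and_precision(readings, reference):
        """Bias of the mean (accuracy) and standard deviation (precision)."""
        bias = statistics.mean(readings) - reference   # small |bias| -> accurate
        spread = statistics.stdev(readings)            # small spread -> precise
        return bias, spread

    readings = [2.524, 2.525, 2.523, 2.524, 2.526]   # tightly clustered
    bias, spread = accuracy_and_precision(readings, reference=2.500)
    print(f"bias   = {bias:+.4f}  (systematic offset: poor accuracy)")
    print(f"spread = {spread:.4f}  (good precision)")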


Four Scenarios

A target on a dartboard serves as a convenient analogy. The results of four sets of measurements (or four dart games) are illustrated below. Each set is made up of ten observations (or throws of darts). Each red dot corresponds to the point at which a dart has hit the target – or alternatively, to the value of an individual observation.

For measurements, assume the true value of the quantity being measured lies at the center of each target.

Now consider the following four sets of results:

[Image: accuracy and precision - four target diagrams]


Number 1: Right on! You win the dart game, and get an A grade on your measurement results.

Number 2: Your results are beautifully replicable, but your measuring device may not have been calibrated properly, or your observations suffer from a systematic error of some kind. Accuracy: F; precision: A; overall grade: C.

Number 3: Extremely unlikely, and probably due to pure luck; the only reason the mean is accurate is that your misses mostly canceled out. Grade: D.

Number 4: Pretty sad; consider switching to music or politics, or have your eyes examined.

Note carefully that when we make real measurements, there is no dartboard or target that enables one to immediately judge the quality of the result. If we make only a few observations, we may be unable to distinguish between any of these scenarios.
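To connect the analogy back to numbers, the hypothetical simulation below generates the four scenarios by combining a systematic bias (poor accuracy) with random scatter (poor precision). None of the parameter values come from the text; they are chosen only to reproduce the four patterns:

    import random
    import statistics

    random.seed(0)

    def simulate(bias, noise_sd, n=10):
        """Ten observations with a given systematic bias and random scatter."""
        return [bias + random.gauss(0, noise_sd) for _ in range(n)]

    # The true value is 0, so an accurate set has a mean near 0.
    scenarios = {
        "1 accurate and precise":  simulate(bias=0.0, noise_sd=0.05),
        "2 precise, not accurate": simulate(bias=1.0, noise_sd=0.05),
        "3 lucky mean, scattered": simulate(bias=0.0, noise_sd=1.00),
        "4 neither":               simulate(bias=1.0, noise_sd=1.00),
    }

    for label, obs in scenarios.items():
        print(f"{label:25s} mean = {statistics.mean(obs):+.2f}, "
              f"spread = {statistics.stdev(obs):.2f}")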



Source: Stephen Lower, http://www.chem1.com/acad/webtext/matmeasure/mm-2.html#SEC1
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
