Read these sections, which explain error analysis when numbers are represented using a fixed number of digits. This issue mostly arises with floating-point numbers, where real numbers are represented using a fixed number of bits; that number of bits is the precision of the representation. The accuracy of the representation is described by the difference between the actual number and its representation using a fixed number of bits; this difference is the error of the representation. Accuracy matters because computations can cause the error to grow so large that the result is meaningless, and potentially even dangerous depending on the application. These sections also explain how programming languages approach number representation.

The fact that floating point numbers can only represent a small fraction of all real numbers means that in practical circumstances a computation will hardly ever be exact. In this section we will study the phenomenon that most real numbers can not be represented, and what it means for the accuracy of computations. This is commonly called *round-off error analysis*.

Numbers that are too large or too small to be represented are uncommon: usually computations can be arranged so that this situation will not occur. By contrast, the case that the result of a computation between computer numbers (even something as simple
as a single addition) is not representable is very common. Thus, looking at the implementation of an algorithm, we need to analyze the effect of such small errors propagating through the computation. We start by analyzing the error between numbers
that can be represented exactly, and those close by that can not be.

If $x$ is a number and $\tilde x$ its representation in the computer, we call $x - \tilde x$ the *representation error* or *absolute representation error*, and $\frac{x - \tilde x}{x}$ the *relative representation error*. Often we are not interested in the sign of the error, so we may apply the terms error and relative error to $|x - \tilde x|$ and $\left|\frac{x - \tilde x}{x}\right|$ respectively.

Often we are only interested in bounds on the error. If $\epsilon$ is a bound on the error, we will write

$$\tilde x = x \pm \epsilon \quad\Leftrightarrow\quad |x - \tilde x| \le \epsilon.$$

For the relative error we note that

$$\tilde x = x(1+\epsilon) \quad\Leftrightarrow\quad \left|\frac{\tilde x - x}{x}\right| \le \epsilon.$$

Let us consider an example in decimal arithmetic, that is $\beta = 10$, and with a 3-digit mantissa: $t = 3$. The number $x = 1.256$ has a representation that depends on whether we round or truncate: $\tilde x_{\mathrm{round}} = 1.26$, $\tilde x_{\mathrm{truncate}} = 1.25$. The error is in the 4th digit: if $\tilde x = x \pm \epsilon$ then $|\epsilon| < \beta^{-(t-1)}$.
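Representation errors are easy to compute directly. The following sketch uses the illustrative value $x = 1.256$ in a 3-digit decimal system; the rounding and truncation helpers are ad-hoc stand-ins, not a real floating point format:

```python
# Representation error for x = 1.256 in a toy 3-digit decimal system.
x = 1.256

# Round and truncate to a 3-digit mantissa (ad-hoc helpers for this example).
x_round = float(f"{x:.3g}")     # -> 1.26
x_trunc = int(x * 100) / 100    # -> 1.25 (works for this 1.xxx example)

for name, xt in [("round", x_round), ("truncate", x_trunc)]:
    abs_err = abs(x - xt)
    rel_err = abs_err / abs(x)
    print(f"{name:8s}: representation {xt}, error {abs_err:.4f}, relative error {rel_err:.5f}")
```

Both errors are below the $\beta^{-(t-1)} = 0.01$ bound, with rounding giving the smaller one.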

Exercise. The number in this example had no exponent part. What are the error and relative error if there had been one?

The IEEE 754 standard, mentioned in section 3.2.4, not only declares the way a floating point number is stored, it also gives a standard for the accuracy of operations such as addition, subtraction, multiplication, and division. The model for arithmetic in the standard is that of *correct rounding*: the result of an operation should be as if the following procedure is followed:

- The exact result of the operation is computed, whether this is representable or not;
- This result is then rounded to the nearest computer number.

If this statement sounds trivial or self-evident, consider subtraction as an example. In a decimal number system with two digits in the mantissa, the computation $1.0 - 9.4\cdot 10^{-1} = 6\cdot 10^{-2}$. Note that in an intermediate step the mantissa $.094$ appears, which has one more digit than the two we declared for our number system. The extra digit is called a *guard digit*.

Without a guard digit, this operation would have proceeded as $1.0 - 9.4\cdot 10^{-1}$, where $9.4\cdot 10^{-1}$ would be rounded to $.09\cdot 10^{1}$, giving a final result of $.01\cdot 10^{1} = 1\cdot 10^{-1}$, which is almost double the correct result.
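A toy simulation of this 2-digit decimal subtraction makes the effect visible; the `round_sig` helper below is an ad-hoc stand-in for the machine's mantissa rounding, not a real floating point implementation:

```python
def round_sig(x, digits):
    """Round x to the given number of significant decimal digits."""
    return float(f"{x:.{digits}g}")

a, b = 1.0, 9.4e-1

# With a guard digit: the intermediate .094 survives long enough that the
# subtraction itself is exact, and only the result is rounded.
with_guard = round_sig(a - b, 2)                    # 0.06

# Without a guard digit: b is first squeezed into 2 digits at a's exponent
# (mantissa .094 -> .09, i.e. the value becomes 0.9) before subtracting.
without_guard = round_sig(a - round_sig(b, 1), 2)   # 0.1

print(with_guard, without_guard)
```

The guarded result is the correctly rounded answer; the unguarded one is almost double it.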

Exercise 3.5. Consider the computation $1.0 - 9.5\cdot 10^{-1}$, and assume again that numbers are rounded to fit the 2-digit mantissa. Why is this computation in a way a lot worse than the example?

One guard digit is not enough to guarantee correct rounding. An analysis that we will not reproduce here shows that three extra bits are needed.

Addition of two floating point numbers is done in a couple of steps. First the exponents are aligned: the smaller of the two numbers is written to have the same exponent as the larger number. Then the mantissas are added. Finally, the result is adjusted so that it again is a normalized number.

As an example, consider $1.00 + 2.00\cdot 10^{-2}$. Aligning the exponents, this becomes $1.00 + 0.02 = 1.02$, and this result requires no final adjustment. We note that this computation was exact, but the sum $1.00 + 2.55\cdot 10^{-2}$ has the same result, and here the computation is clearly not exact: the exact result is $1.0255$, which is not representable with 3 digits in the mantissa. The error is $1.0255 - 1.02 = 0.0055$, and we note that the mantissa has 3 digits, so there clearly is a relation with the machine precision here.

In the example $6.15\cdot 10^{1} + 3.98\cdot 10^{1} = 10.13\cdot 10^{1} = 1.013\cdot 10^{2} \rightarrow 1.01\cdot 10^{2}$ we see that after addition of the mantissas an adjustment of the exponent is needed. The error again comes from truncating or rounding the first digit of the result that does not fit in the mantissa: if $x$ is the true sum and $\tilde x$ the computed sum, then $\tilde x = x(1+\epsilon)$ where, with a 3-digit mantissa, $|\epsilon| < 10^{-3}$.

Formally, let us consider the computation of $s = x_1 + x_2$, and we assume that the numbers $x_i$ are represented as $\tilde x_i = x_i(1+\epsilon_i)$. Then the sum $s$ is represented as

$$\tilde s = (\tilde x_1 + \tilde x_2)(1+\epsilon_3) = x_1(1+\epsilon_1)(1+\epsilon_3) + x_2(1+\epsilon_2)(1+\epsilon_3) \approx x_1(1+\epsilon_1+\epsilon_3) + x_2(1+\epsilon_2+\epsilon_3) \approx s(1+2\epsilon),$$

under the assumptions that all $\epsilon_i$ are small and of roughly equal size $\epsilon$, and that both $x_i > 0$. We see that the relative errors are added under addition.

Floating point multiplication, like addition, involves several steps. In order to multiply two numbers $m_1\cdot\beta^{e_1}$ and $m_2\cdot\beta^{e_2}$, the following steps are needed.

- The exponents are added: $e \leftarrow e_1 + e_2$.
- The mantissas are multiplied: $m \leftarrow m_1 \times m_2$.
- The mantissa is normalized, and the exponent adjusted accordingly.

What happens with relative errors?

Subtraction behaves very differently from addition. Whereas in addition errors are added, giving only a gradual increase of overall roundoff error, subtraction has the potential for greatly increased error in a single operation.

For example, consider subtraction with 3 digits to the mantissa: $.124 - .123 = .001 \rightarrow 1.00\cdot 10^{-3}$. While the result is exact, it has only one significant digit. To see this, consider the case where the first operand $.124$ is actually the rounded result of a computation that should have resulted in $.1235$. In that case, the result of the subtraction should have been $.1235 - .123 = .0005 = 5.00\cdot 10^{-4}$, that is, there is a 100% error, even though the relative error of the inputs was as small as could be expected. Clearly, subsequent operations involving the result of this subtraction will also be inaccurate. We conclude that subtracting almost equal numbers is a likely cause of numerical roundoff.

There are some subtleties about this example. Subtraction of almost equal numbers is exact, and we have the correct rounding behaviour of IEEE arithmetic. Still, the correctness of a single operation does not imply that a sequence of operations containing it will be accurate. While the addition example showed only modest decrease of numerical accuracy, the cancellation in this example can have disastrous effects.
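The same cancellation shows up in ordinary double precision. As an illustration (a classic textbook example, not from the text above): computing $1-\cos x$ for small $x$ subtracts two nearly equal numbers, while the mathematically equivalent form $2\sin^2(x/2)$ avoids the subtraction entirely:

```python
import math

x = 1.0e-8

# Direct form: cos(x) rounds to exactly 1.0 in double precision,
# so the subtraction cancels away every significant digit.
direct = 1.0 - math.cos(x)            # 0.0

# Rewritten form: no subtraction of nearly equal numbers.
stable = 2.0 * math.sin(x / 2.0)**2   # about 5.0e-17

print(direct, stable)
```

The direct form returns zero, a 100% relative error; the rewritten form keeps full precision.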

From the above, the reader may have gotten the impression that roundoff errors only lead to serious problems in exceptional circumstances. In this section we will discuss some very practical examples where the inexactness of computer arithmetic becomes visible in the result of a computation. These will be fairly simple examples; more complicated examples exist that are outside the scope of this book, such as the instability of matrix inversion.

The 'abc-formula'

As a practical example, consider the quadratic equation $ax^2 + bx + c = 0$, which has solutions $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$. Suppose $b > 0$ and $b^2 \gg 4ac$; then $\sqrt{b^2 - 4ac} \approx b$ and the '+' solution will be inaccurate. In this case it is better to compute $x_- = \frac{-b - \sqrt{b^2 - 4ac}}{2a}$ and use $x_+ \cdot x_- = c/a$.
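A minimal sketch of both approaches in double precision; the coefficients here are chosen only to make $b^2 \gg 4ac$:

```python
import math

a, b, c = 1.0, 1.0e8, 1.0
# The small root is approximately -c/b = -1e-8, since b^2 >> 4ac.

d = math.sqrt(b*b - 4.0*a*c)

# Naive '+' root: -b + d subtracts two nearly equal numbers, losing digits.
x_plus_naive = (-b + d) / (2.0*a)

# Stable version: compute the '-' root first, then use x_plus * x_minus = c/a.
x_minus = (-b - d) / (2.0*a)
x_plus_stable = (c / a) / x_minus

print(x_plus_naive, x_plus_stable)
```

The naive root is off by roughly 50% here, while the stable one is accurate to machine precision.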

Summing series

The previous example was about preventing a large roundoff error in a single operation. This example shows that even gradual buildup of roundoff error can be handled in different ways.

Consider the sum $\sum_{n=1}^{10000} \frac{1}{n^2} = 1.644834\ldots$ and assume we are working with single precision, which on most computers means a machine precision of $10^{-7}$. The problem with this example is that both the ratio between terms and the ratio of terms to partial sums are ever increasing. In section 3.2.3 we observed that a too large ratio can lead to one operand of an addition in effect being ignored.

If we sum the series in the sequence it is given, we observe that the first term is 1, so all partial sums $\sum_{n=1}^{N}$ where $N < 10000$ are at least 1. This means that any term where $1/n^2 < 10^{-7}$ gets ignored since it is less than the machine precision. Specifically, the last 7000 terms are ignored, and the computed sum is 1.644725. The first 4 digits are correct.

However, if we evaluate the sum in reverse order we obtain the exact result in single precision. We are still adding small quantities to larger ones, but now the ratio will never be as bad as one-to-$\epsilon$, so the smaller number is never ignored. To see this, consider the ratio of two subsequent terms:

$$\frac{n^2}{(n-1)^2} = \frac{n^2}{n^2 - 2n + 1} \approx 1 + \frac{2}{n}.$$

Since we only sum $10^4$ terms and the machine precision is $10^{-7}$, in the addition $1/n^2 + 1/(n-1)^2$ the second term will not be wholly ignored as it is when we sum from large to small.
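A small experiment makes the difference between the two summation orders visible. This sketch emulates single precision with NumPy's `float32`; the exact digits of the forward sum may vary slightly with the rounding details:

```python
import numpy as np

# Terms 1/n^2 for n = 1..10000, stored in single precision.
n = np.arange(1, 10001, dtype=np.float64)
terms = (1.0 / n**2).astype(np.float32)

def accumulate(ts):
    """Sum the terms one by one with a float32 accumulator."""
    s = np.float32(0.0)
    for t in ts:
        s = np.float32(s + t)
    return float(s)

forward = accumulate(terms)         # large to small: tail terms get ignored
backward = accumulate(terms[::-1])  # small to large: close to 1.644834

print(forward, backward)
```

The backward sum agrees with the true value 1.644834 to single precision; the forward sum loses the contribution of the tail.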

Exercise. There is still a step missing in our reasoning. We have shown that in adding two subsequent terms, the smaller one is not ignored. However, during the calculation we add partial sums to the next term in the sequence. Show that this does not worsen the situation.

The lesson here is that series that are monotone (or close to monotone) should be summed from small to large, since the error is minimized if the quantities to be added are closer in magnitude. Note that this is the opposite strategy from the case of subtraction, where operations involving similar quantities lead to larger errors. This implies that if an application asks for adding and subtracting series of numbers, and we know a priori which terms are positive and negative, it may pay off to rearrange the algorithm accordingly.

Unstable algorithms

We will now consider an example where we can give a direct argument that the algorithm can not cope with problems due to inexactly represented real numbers.

Consider the recurrence $y_n = \int_0^1 \frac{x^n}{x+5}\,dx = \frac{1}{n} - 5y_{n-1}$. This is easily seen to be monotonically decreasing; the first term can be computed as $y_0 = \ln 6 - \ln 5$. Performing the computation in 3 decimal digits, we see that the computed results quickly become not just inaccurate, but actually nonsensical. We can analyze why this is the case.
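The blow-up can be reproduced with a short script. This is a sketch: it iterates $y_n = 1/n - 5y_{n-1}$ from $y_0 = \ln 6 - \ln 5$, and the `round3` helper stands in for 3-digit decimal arithmetic:

```python
import math

def round3(x):
    """Round to 3 significant decimal digits, emulating 3-digit arithmetic."""
    return float(f"{x:.3g}")

y = round3(math.log(6) - math.log(5))   # y_0 = 0.182
values = []
for n in range(1, 6):
    y = round3(1.0/n - 5.0*y)
    values.append(y)

print(values)
# The true y_n stay positive and decrease monotonically;
# the computed values go negative and then overshoot past y_0.
```

Already at $n = 4$ the computed value is negative, which is impossible for the integral of a positive function.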

If we define the error $\epsilon_n$ in the *n*-th step as

$$\tilde y_n = y_n + \epsilon_n,$$

then

$$\tilde y_n = \frac{1}{n} - 5\tilde y_{n-1} = \frac{1}{n} - 5y_{n-1} - 5\epsilon_{n-1} = y_n - 5\epsilon_{n-1},$$

so $\epsilon_n = -5\epsilon_{n-1}$, and thus $|\epsilon_n| = 5^n|\epsilon_0|$. The error made by this computation shows exponential growth.

Linear system solving

Sometimes we can make statements about the numerical precision of a problem even without specifying what algorithm we use. Suppose we want to solve a linear system, that is, we have an $n\times n$ matrix *A* and a vector *b* of size *n*, and we want to compute the vector *x* such that *Ax = b*. (We will actually consider algorithms for this in chapter 5.) Since the vector *b* will be the result of some computation or measurement, we are actually dealing with a vector $\tilde b$, which is some perturbation of the ideal *b*:

$$\tilde b = b + \Delta b.$$

The perturbation vector $\Delta b$ can be of the order of the machine precision if it only arises from representation error, or it can be larger, depending on the calculations that produced $\tilde b$.

We now ask what the relation is between the exact value of *x*, which we would have obtained from doing an exact calculation with *A* and *b* (which is clearly impossible), and the computed value $\tilde x$, which we get from computing with *A* and $\tilde b$. (In this discussion we will assume that *A* itself is exact, but this is a simplification.)

Writing $\tilde x = x + \Delta x$, the result of our computation is now

$$A\tilde x = \tilde b$$

or

$$A(x + \Delta x) = b + \Delta b.$$

Since *Ax = b*, we get $A\Delta x = \Delta b$. From this, we get $\|\Delta x\| \le \|A^{-1}\|\,\|\Delta b\|$ and $\|b\| \le \|A\|\,\|x\|$, so

$$\frac{\|\Delta x\|}{\|x\|} \le \|A\|\,\|A^{-1}\|\,\frac{\|\Delta b\|}{\|b\|}.$$

The quantity $\|A\|\,\|A^{-1}\|$ is called the *condition number* of a matrix. The bound above then says that any relative perturbation in the right hand side can lead to a relative perturbation in the solution that is at most larger by the condition number of the matrix *A*. Note that it does not say that the perturbation in *x* needs to be anywhere close to that size, but we can not rule it out, and in some cases it indeed happens that this bound is attained.

Suppose that *b* is exact up to machine precision, and the condition number of *A* is $10^4$. The bound above is often interpreted as saying that the last 4 digits of *x* are unreliable, or that the computation 'loses 4 digits of accuracy'.
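The amplification can be demonstrated numerically. This sketch uses an ill-conditioned $2\times 2$ matrix chosen purely for illustration (NumPy's `cond` uses the 2-norm by default):

```python
import numpy as np

# A nearly singular matrix: condition number around 4e4.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])          # exact solution is x = (1, 1)
x = np.linalg.solve(A, b)

# Perturb the right hand side by about 1e-6 in norm.
db = np.array([1.0e-6, 0.0])
dx = np.linalg.solve(A, b + db) - x

rel_in  = np.linalg.norm(db) / np.linalg.norm(b)
rel_out = np.linalg.norm(dx) / np.linalg.norm(x)

print(f"cond(A) = {np.linalg.cond(A):.1e}")
print(f"relative input perturbation:  {rel_in:.1e}")
print(f"relative output perturbation: {rel_out:.1e}")
# The output perturbation is thousands of times the input perturbation,
# yet still within the condition-number bound.
```

Here a perturbation of relative size about $3.5\cdot 10^{-7}$ in *b* produces a relative change of about $10^{-2}$ in *x*.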

From the above example of summing a series we saw that addition in computer arithmetic is not associative. A similar fact holds for multiplication. This has an interesting consequence for parallel computations: the way a computation is spread over parallel processors influences the result. For instance, consider computing the sum of a large number *N* of terms. With *P* processors at our disposal, we can let each compute *N/P* terms, and combine the partial results. We immediately see that in general no two values of *P* will give identical results. This means that reproducibility of results in a parallel context is elusive.

Different languages have different approaches to storing integers and floating point numbers. In Fortran it is possible to specify the number of bytes that a number takes up: INTEGER*2, REAL*8. Often it is possible to write a code using only INTEGER, REAL, and use compiler flags to indicate the size of an integer and real number.

In C, the type identifiers have no standard length. For integers there is short int, int, long int, and for floating point float, double. The sizeof() operator gives the number of bytes used to store a datatype. Recent standards of the C and Fortran languages (C99, Fortran2003) incorporate the C/Fortran interoperability standard, which can be used to declare a type in one language so that it is compatible with a certain type in the other language.

Other systems have been proposed for dealing with the problems of inexact arithmetic on computers. One solution is extended precision arithmetic, where numbers are stored in more bits than usual. A common use of this is in the calculation of inner products of vectors: the accumulation is internally performed in extended precision, but returned as a regular floating point number. Alternatively, there are libraries such as GMPlib that allow for any calculation to be performed in higher precision.

Another solution to the imprecisions of computer arithmetic is 'interval arithmetic', where for each calculation interval bounds are maintained. While this has been researched for considerable time, it is not practically used other than through specialized libraries.

A fixed-point number can be represented as $\langle N, F\rangle$ where $N \ge 0$ is the integer part and $F < 1$ is the fractional part.

Another way of looking at this, is that a fixed-point number is an integer stored in *N + F* digits, with an implied decimal point after the first *N* digits.

Fixed-point calculations can overflow, with no possibility to adjust an exponent. Consider the multiplication $\langle N_1, F_1\rangle \times \langle N_2, F_2\rangle$. This overflows if the integer part of the product needs more positions than are available. (Informally, the number of digits of the product is the sum of the digits of the operands.) This means that, in a program that uses fixed-point numbers, numbers will need to have a number of leading zero digits if you are ever going to multiply them, which lowers the numerical accuracy. It also means that the programmer has to think harder about calculations, arranging them in such a way that overflow will not occur, and that numerical accuracy is still preserved to a reasonable extent.
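A minimal fixed-point sketch in the common Q16.16 format (16 integer bits, 16 fractional bits); the format choice and helper names are illustrative, not from the text:

```python
FRAC_BITS = 16
SCALE = 1 << FRAC_BITS   # Q16.16: value = raw_integer / 2^16

def to_fixed(x):
    """Encode a float as a Q16.16 integer."""
    return int(round(x * SCALE))

def from_fixed(a):
    """Decode a Q16.16 integer back to a float."""
    return a / SCALE

def fixed_mul(a, b):
    # The raw product has 32 fractional bits; shift back to 16.
    # The integer part of the result needs the combined integer digits of
    # both operands, which is where overflow can strike on fixed-width ints.
    return (a * b) >> FRAC_BITS

x = fixed_mul(to_fixed(1.5), to_fixed(2.25))
print(from_fixed(x))   # 3.375
```

Everything here is plain integer arithmetic, which is exactly why fixed point is attractive on hardware without a floating-point unit.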

So why would people use fixed-point numbers? One important application is in embedded low-power devices, for example a battery-powered digital thermometer. Since fixed-point calculations are essentially identical to integer calculations, they do not require a floating-point unit, thereby lowering chip size and lessening power demands. Also, many early video game systems had a processor that either had no floating-point unit, or where the integer unit was considerably faster than the floating-point unit. In both cases, implementing non-integer calculations as fixed-point, using the integer unit, was the key to high throughput.

Another area where fixed-point arithmetic is still used is in signal processing. In modern CPUs, integer and floating point operations are of essentially the same speed, but converting between them is relatively slow. Now, if the sine function is implemented through table lookup, this means that in sin(sin *x*) the output of a function is used to index the next function application. Obviously, outputting the sine function in fixed point obviates the need for conversion between real and integer quantities, which simplifies the chip logic needed, and speeds up calculations.

Some programming languages have complex numbers as a native data type, others not, and others are in between. For instance, in Fortran you can declare

COMPLEX z1,z2, z(32)

COMPLEX*16 zz1, zz2, zz(36)

A complex number is a pair of real numbers, the real and imaginary part, allocated adjacent in memory. The first declaration then uses 8 bytes to store two REAL*4 numbers; the second one has REAL*8s for the real and imaginary part. (Alternatively, use DOUBLE COMPLEX or in Fortran90 COMPLEX(KIND=2) for the second line.)

By contrast, the C language does not natively have complex numbers, but both C99 and C++ have a complex.h header file. This defines a complex number, as in Fortran, as a pair of real numbers.

Storing a complex number like this is easy, but sometimes it is computationally not the best solution. This becomes apparent when we look at arrays of complex numbers. If a computation often relies exclusively on access to the real (or imaginary) parts of complex numbers, striding through an array of complex numbers has a stride of two, which is disadvantageous (see section 1.2.4.3). In this case, it is better to allocate one array for the real parts, and another for the imaginary parts.
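NumPy stores complex arrays in this interleaved (Fortran-style) layout, which makes the stride directly visible; a sketch:

```python
import numpy as np

z = np.zeros(4, dtype=np.complex128)   # interleaved in memory: re, im, re, im, ...

# Accessing only the real parts strides over 16 bytes per element,
# skipping each 8-byte imaginary part -- a stride of two values.
print(z.real.strides)    # (16,)

# A separate array for the real parts gives unit (8-byte) stride.
re = np.ascontiguousarray(z.real)
print(re.strides)        # (8,)
```

The copy into a separate contiguous array is exactly the "one array for the real parts, another for the imaginary parts" layout advocated above.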

Exercise. Suppose arrays of complex numbers are stored the Fortran way. Analyze the memory access pattern of pairwise multiplying the arrays, that is, computing c(i) = a(i)*b(i) for all i, where a(), b(), c() are arrays of complex numbers.

Exercise. Show that an $n\times n$ linear system *Ax = b* over the complex numbers can be written as a $2n\times 2n$ system over the real numbers. Hint: split the matrix and the vectors in their real and imaginary parts. Argue for the efficiency of storing arrays of complex numbers as separate arrays for the real and imaginary parts.

Source: Victor Eijkhout, Edmond Chow, and Robert van de Geijn, https://s3.amazonaws.com/saylordotorg-resources/wwwresources/site/textbookuploads/5345_scicompbook.pdf

This work is licensed under a Creative Commons Attribution 3.0 License.
