Study the "Amdahl's Law" section. Amdahl's law explains the limits of the performance gains achievable through parallelism. Over the last several decades, increases in computer performance have come largely from improvements to hardware technology and less from software technology. Now, however, those improvements may be nearing their limits. For performance to keep improving significantly, either new physical technologies must be discovered and transitioned to practice, or software techniques will have to be developed that yield significant gains in computing performance.

In the equation for Amdahl's law, P is the fraction of the code that can be parallelized; S is the fraction that cannot be parallelized (that is, the part that must be executed serially); and n is the number of processors. P + S = 1. With n processors, the work P + S can be executed in the time it takes to execute P/n + S. Thus, the ratio of the execution time on 1 processor to the execution time on n processors is 1/(P/n + S). This is the speedup gained by going from 1 processor to n processors.

The speedup is limited, even for large n. If n is 1, the speedup is 1. If n is very large, the speedup approaches 1/S. If P is very small, then P/n is smaller still, so P/n + S is approximately S. The speedup is then about 1/S; but with P so small, S is approximately S + P, which is 1. Therefore, code of this kind runs in about the same time on n processors as on 1.

Another way of writing Amdahl's law is 1/(P/n + [1 - P]). Thus, if P is close to 1, the speedup is approximately 1/(P/n) = n/P, which is approximately n.

To better understand how Amdahl's law works, apply it by substituting a variety of numeric values into this equation and sketching the graph of the equation.
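As a sketch of that exercise, the following Python snippet (the helper name `amdahl_speedup` and the particular values of P and n are our own choices, not from the original) tabulates the speedup 1/(P/n + S) for several values of P and n:

```python
# Speedup predicted by Amdahl's law: 1 / (P/n + S), with S = 1 - P.
def amdahl_speedup(P, n):
    """Speedup on n processors when the fraction P of the code parallelizes."""
    S = 1.0 - P
    return 1.0 / (P / n + S)

# Tabulate the speedup for a few values of P and n.
for P in (0.5, 0.9, 0.99):
    for n in (1, 4, 16, 1024):
        print(f"P={P:.2f}  n={n:<4d}  speedup={amdahl_speedup(P, n):6.2f}")
```

Notice that even with P = 0.99 and over a thousand processors, the speedup stays below the 1/S = 100 ceiling.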

The speed of a program is the time it takes the program to execute. This could be measured in any increment of time. Speedup is defined as the time it takes a program to execute sequentially (with one processor) divided by the time it takes to execute in parallel (with many processors). The formula for speedup is:

S_{p} = T_{1} / T_{p}

Where:

- S_{p} - The speedup obtained from using *p* processors.
- T_{1} - The time it takes the program to be executed sequentially.
- T_{p} - The time it takes the program to be executed in parallel using *p* processors.
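As a minimal sketch of this definition (the timing values below are hypothetical, chosen only for illustration):

```python
def speedup(t1, tp):
    """S_p = T_1 / T_p: sequential execution time divided by parallel execution time."""
    return t1 / tp

# Hypothetical measurements: 60 seconds on one processor, 20 seconds on four.
print(speedup(60.0, 20.0))  # 3.0
```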

**Linear speedup** or **ideal speedup** is obtained when S_{p} = *p*. When running an algorithm with linear speedup, doubling the number of processors doubles the speed. As this
is ideal, it is considered very good scalability.

**Amdahl's law** states that the speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program.

As a formula:

S_{p} = 1 / (F + (1 - F) / *p*)

Where:

- *p* - The number of processors.
- S_{p} - The speedup obtained from using *p* processors.
- F - The fraction of the program that must be executed sequentially. 0 ≤ F ≤ 1.

If *p* tends to ∞, the maximum speedup tends to 1 / F.
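This form of the formula can be checked numerically; a short sketch (the function name `amdahl` and the value F = 0.05 are our own):

```python
def amdahl(F, p):
    """S_p = 1 / (F + (1 - F)/p) for sequential fraction F and p processors."""
    return 1.0 / (F + (1.0 - F) / p)

# With no sequential part (F = 0) the speedup is linear: S_p = p.
print(amdahl(0.0, 8))  # 8.0

# As p grows, the speedup approaches the 1/F ceiling (here 1/0.05 = 20).
print(amdahl(0.05, 10**6))
```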

Source: Ariel Ortiz Ramírez, http://webcem01.cem.itesm.mx:8005/apps/s201311/tc5004/notes_parallelism/

This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 License.

Last modified: Tuesday, 14 July 2020, 10:28 PM