From section 4 of Chapter 3 of this resource, study the section titled "Amdahl's Law."
Limits and Costs of Parallel Programming
Amdahl's Law:

Amdahl's Law states that potential program speedup is defined by the fraction of code (P) that can be parallelized:

    speedup = 1 / (1 - P)

Introducing the number of processors (N) performing the parallel fraction of work, the relationship becomes:

    speedup = 1 / (P/N + S)

where S = 1 - P is the serial fraction.
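As a concrete reading of the second form (a worked example added here, not from the tutorial): with P = 0.90 and N = 10, speedup = 1 / (0.90/10 + 0.10) = 1 / 0.19 ≈ 5.26, which is the value that appears in the table below.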
- It soon becomes obvious that there are limits to the scalability of parallelism. For example:
                                speedup
           -----------------------------------------------
        N      P = .50    P = .90    P = .95    P = .99
      -------  -------    -------    -------    -------
           10     1.82       5.26       6.89       9.17
          100     1.98       9.17      16.80      50.25
        1,000     1.99       9.91      19.62      90.99
       10,000     1.99       9.91      19.96      99.02
      100,000     1.99       9.99      19.99      99.90
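The table values can be reproduced directly from the second formula. Below is a minimal Python sketch (the name amdahl_speedup is our own, not from the tutorial); a few printed digits may differ from the table above in the last place because of rounding in the original:

```python
def amdahl_speedup(p, n):
    # Amdahl's Law: speedup = 1 / (P/N + S), with serial fraction S = 1 - P.
    return 1.0 / (p / n + (1.0 - p))

# Rows are processor counts N; columns are parallel fractions P,
# matching the table above.
for n in (10, 100, 1_000, 10_000, 100_000):
    row = "  ".join(f"{amdahl_speedup(p, n):7.2f}" for p in (0.50, 0.90, 0.95, 0.99))
    print(f"{n:>8,}  {row}")
```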
"Famous" quote: You can spend a lifetime getting 95% of your code to be parallel, and never achieve better than 20x speedup no matter how many processors you throw at it!
- However, certain problems demonstrate increased performance by increasing the problem size. For example:
      2D Grid Calculations    85 seconds    85%
      Serial fraction         15 seconds    15%
We can increase the problem size by doubling the grid dimensions and halving the time step. This results in four times the number of grid points and twice the number of time steps. The timings then look like:
      2D Grid Calculations    680 seconds   97.84%
      Serial fraction          15 seconds    2.16%
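Spelling out the arithmetic behind those timings (a check added here, consistent with the numbers above): the parallel work scales by 4 × 2 = 8, so 85 seconds × 8 = 680 seconds, while the serial part stays at 15 seconds. The parallel fraction is then 680 / 695 ≈ 97.84% and the serial fraction 15 / 695 ≈ 2.16%.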
- Problems that increase the percentage of parallel time with their size are more scalable than problems with a fixed percentage of parallel time.
Source: Blaise Barney, https://computing.llnl.gov/tutorials/parallel_comp/. This work is in the Public Domain.